Disclaimer: this publication is a logical continuation of the first part, which is available at the link.
Back in 2015, Intel's marketers, together with serious people, presented the world with good news in the form of an announcement of something then incomprehensible: either a solution, a product, or a technology, under the 3D XPoint market label. It concerned a new class of memory, which was to be embodied in modules named Optane. The press releases promised nothing short of a miracle, with subsequent canonization of the buyers; that is how bold and implausible the promised characteristics looked. Had the promotion of such a product been entrusted to the evangelists at Apple, the "experienced fan" would have urgently needed at least a gym membership and a course of steroids to carry the hype.
Judge for yourself: the memory was promised to be 1000 times faster than NAND, 1000 times more durable than that same NAND, and 10 times denser (and therefore more compact) than DRAM. Screenshot as proof!
Within the system, the new product was positioned between two things more or less clear to the masses: RAM and solid-state drives. A breakthrough hybrid worthy of Michurin was expected, with IOPS figures beyond anything of the time. The prices voiced were anything but modest.
A slightly deeper study of the press releases, using the corporate solutions as an example, hinted that Intel had invented a certain type of cache on a technology so fast that it could even cache solid-state drives, which were spreading thanks to falling prices. For the ordinary user who had only just joined the world of fast storage, everything looked less like chocolate and more like confusion: just yesterday, in their understanding, the SSD was the peak of drive performance, and suddenly there was Optane. Technical specialists, of course, understood everything in advance, but they are, as we know, a minority. The rest expected something truly amazing in the very foundations of storing and accessing information.
The reality turned out to be somewhat more prosaic. Alongside the corporate offerings, retail was offered a future starting with 16 GB NVMe modules for more than 40 USD and 32 GB ones for almost 80 USD, to be used as a cache in a rather unconventional way.
The miracle was supposed to serve as a crutch mainly for slow hard-drive-based systems, which still make up the majority of the user fleet on planet Earth. At first glance, Optane was to be a kind of buffer between fast RAM and a slower drive. RAM is expensive; decently sized SSDs are expensive. Why not cache the sluggish media with a revolutionary piece of hardware?
But the suggested implementation was somewhat illogical. Intel's main condition for the caching drive was its use in systems based on 7th-generation Core processors (and not even all of them) and newer. The target audience was thus limited to buyers of new and expensive hardware, who were almost guaranteed to plan for a system SSD anyway. As a result, the proposed "cache at a high price" was simply unnecessary in a new, modern system. Intel tried to shift the emphasis: players, they said, would buy a small SSD for the system, while the HDD holding large game releases would still need speeding up, and that is where Optane could help.
But it "did not take off", because the old hardware fleet was defiantly ignored, while for the new one it was simply too late, thanks to the deep penetration of large amounts of RAM and SSDs, first among solvent enthusiasts and then among the broad masses.
Sarcasm aside, it should be noted that the technology itself turned out well and made it possible to create drives of genuinely unprecedented speed, if not the fabulous speed expected, especially in random access. Therefore, fairly quickly and with an eye on the corporate sector, the capacious Optane SSD 800P and the indecently fast Optane SSD 900P were rolled out on the basis of 3D XPoint. But the prices... And yes, the claimed 1000-fold endurance advantage was clearly an exaggeration, although in my personal opinion the 3DNews endurance test is somewhat unrepresentative, and the linked result clearly hints that the drive's death was most likely programmatic, triggered by a predetermined algorithm after a set number of cell rewrites. Without analyzing the controller firmware, however, these statements are unverifiable.
Summing up the marketing campaign, we can say that Intel and its colleagues managed to create very fast corporate-class drives, which in retail can only be a niche product because of the price: either for wealthy professional enthusiasts, or as parts for unconventional caching tasks.
Caching turned out to be not so simple either. The implementation, although late, could nevertheless have been useful for millions of old systems: even the youngest Optanes have more than enough speed to seriously support the disk subsystems of the past. But there was nowhere to plug an Optane in: NVMe was not supported by all motherboards, and Intel offered no other implementation. In systems that did have somewhere to plug the magic wand but whose processor and chipset did not fit the requirements, it appeared as an ordinary drive of a ridiculous size by today's standards. In short, the manufacturer artificially limited the circle of potential buyers to those who had no interest in the product.
And what about caching? It existed before Optane. Modern operating systems cache disk I/O by standard means, with varying degrees of efficiency, imperceptibly to the user. In the days of smaller RAM, a more or less comparable analog of what Intel conceived with Optane (an analog in meaning, though very different in implementation) was called ReadyBoost in Windows, and it was naturally implemented with flash drives (not only, but mostly), which could be much faster than spinning hard drives at random access. USB 3.0 and all that. Particularly advanced enthusiasts could press a RAM disk into service as a booster, but then the question arose of whether spending expensive memory this way was worth it.
Spherical oldfags in a vacuum remember the times when running, say, DOOM from a RAM drive was living large: not every machine had enough RAM for it. It completely eliminated loading stutters from a very slow disk in mass battle scenes, along with the blinking floppy icon in the corner. In other words, the bottleneck even then was the hard drive, but the consumer segment offered no alternatives, and a RAM drive was almost the only way to see far beyond the speed limits of the HDDs on offer. One could also place temporary folders there and even run working software, but the question of volatility spoiled everything. For lack of anything better, mutants like the Gigabyte i-RAM were born, but as RAM became more affordable they were pushed to the periphery of the market and forgotten by everyone except very rare geeks.
In the corporate segment, caching software has existed for a long time; it works well and costs a lot. But there the question of price is secondary.
In the retail segment, products such as PrimoCache work successfully. How it all works in practice can be seen here. It works quite well, especially considering that a two-tier caching system is implemented: instead of the system stopping to work with the disk, data is placed in a RAM cache and written to the drive in the background, without any noticeable discomfort. If the data has not yet reached the drive, re-reading it happens from the RAM cache almost instantly, while the most popular and voluminous data is read from a duplicated copy on the SSD, which this program calls the L2 cache. Moreover, the PrimoCache scheme is relatively safe, as long as you do not get carried away pushing the deferred-write horizon for the HDD too far into the future.
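The scheme just described can be sketched in a few lines. This is a toy model, not PrimoCache's actual algorithm: tier sizes, eviction, and persistence are all omitted, and the names are my own.

```python
class TwoTierCache:
    """Toy model of two-tier caching: RAM (L1) in front of an SSD copy (L2),
    with writes to the slow HDD deferred in the background.
    All tiers are plain dicts; real block I/O is simulated."""

    def __init__(self, flush_delay=60):
        self.l1 = {}             # RAM cache: block -> data
        self.l2 = {}             # SSD cache: duplicated popular blocks
        self.hdd = {}            # the real, slow backing store
        self.dirty = {}          # written but not yet flushed: block -> write time
        self.flush_delay = flush_delay

    def write(self, block, data, now):
        self.l1[block] = data    # lands in RAM instantly
        self.dirty[block] = now  # will be flushed to the HDD later

    def read(self, block):
        if block in self.l1:
            return self.l1[block], "RAM"
        if block in self.l2:
            data = self.l2[block]
            self.l1[block] = data   # promote to RAM
            return data, "SSD"
        data = self.hdd[block]      # slow path: the real disk
        self.l2[block] = data       # duplicate (not move!) popular data on SSD
        self.l1[block] = data
        return data, "HDD"

    def flush(self, now):
        """Background write-back after the configured delay."""
        for block, t in list(self.dirty.items()):
            if now - t >= self.flush_delay:
                self.hdd[block] = self.l1[block]
                del self.dirty[block]
```

The key property, the one Intel's move-based scheme gives up, is that the HDD always holds a complete copy of everything except the still-unflushed writes, so losing either cache tier loses almost nothing.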
It would seem: why did Intel not follow this path? What prevented making Optane caching available to ancient systems and securing a large market? Apparently, first of all, the orientation toward the corporate client. But then why brew this mess at all? It should be noted here that Intel, although it implemented the long-known multi-level cache technology well, could not help but be original in a bad sense. The fact is that, judging by the manuals, the caching of an HDD via Optane, in those systems where it could be used, is very peculiar: in contrast to duplicating the OS's popular data on disk, as PrimoCache does, Intel decided not to duplicate the data, but to move it.
That is, in theory, if the caching system collapses in the case of PrimoCache, all the source information remains intact on the cached media, minus whatever did not manage to be written from the RAM cache, and the system then boots completely from a slow but alive and whole drive. In Intel's case, a collapse leaves pieces of data scattered across different physical media, and whether anything can be restored in that situation is a big question, with nowhere to even try.
AMD did not lag behind, presenting StoreMI, a conceptually similar solution, not forgetting restrictions on processor line and chipset (X399 or the 400 series), but allowing any SSD to be used for caching. Notably, once again the solution was pushed to those who did not particularly need it: buyers of the new lines. That is how AMD saw it.
Against this background, PrimoCache, which works on any system, looked somewhat more universal than the market leaders' solutions, unreasonably rigidly tied to hardware. To new hardware. Hardware that is not cheap, especially in Intel's case. But more on that below.
It should also be noted that caching with free RAM is offered free of charge by Crucial and Samsung; the main requirement is a system disk from these manufacturers. In Crucial's case, the Momentum Cache software can work not only with the Crucial system SSD but also with other SATA media in the system.
That is, the choice is large enough, and against its background the closed nature of Intel's solution is not entirely clear, especially considering the truly outstanding performance of their products, which only the best NAND-based competitors can catch up with, and then only in multichannel SLC-cache mode.
Let's draw an interim conclusion. The user today has a very wide choice. One can dedicate a cheap SSD to the system; such drives are now massively available, at unbelievable speeds in the case of NVMe, for relatively reasonable money. This is the most correct, logical and effective choice today. Depending on the need to store cold data, one can add an HDD, additional SSDs, or RAID arrays of either, to taste and according to reliability requirements. Or external storage, NAS, clouds, and so on. And given that market leaders like Samsung also bundle a native cache with the product, Intel's caching clearly walks right past the cash register.
The second direction is the caching as Intel and AMD envisioned it, but both are tied to hardware and implemented ambiguously. As a result, the solvent users fall into the category above, and the rest are simply not the audience. Add to this that DDR4 was still indecently expensive at the time, which only slightly slowed the global transition to new platforms for those to whom the old ones were still relatively adequate.
Those who, by all logic, should have been the target audience, namely the owners of the old fleet, remained undeservedly forgotten by the mainstream producers. And although the mainstream has other goals, nevertheless, as noted, in the world's PC fleet the main system disk is still the old and slow HDD, even though the rest of many such systems would allow them to remain quite workable even by modern standards.
One should also not ignore the fact that replacing the HDD with an SSD is not always possible on many of these machines, for subjective reasons: everything on the machine is rehearsed and familiar, passwords are saved, files are in convenient places, and reinstalling the system from scratch can be stressful in every sense. Direct migration by cloning is also not always convenient, especially if it is not done by a specialist who remembers, for example, to turn off scheduled defragmentation. In some cases, moving to new hardware can be a local-scale catastrophe, especially with accounting software. It is precisely this audience that would have reaped all the benefits of Optane, but they were crossed off the list of potential consumers with a bold black marker, that is, by artificial platform restrictions and the price, which makes the whole venture, even in perspective and against the background of falling prices for traditional SSDs, seem completely unpromising.
To illustrate once more the situation with caching old and slow HDDs, given the current capabilities of that same PrimoCache and a 16 GB Kingston SSD, I will give just a few screenshots carefully prepared for this purpose on an ordinary laptop used for typical SOHO and entertainment tasks, with an uptime of approximately 90 days, i.e. 3 months. There are no resource-intensive tasks such as video transcoding or modeling of large-scale projects. I am sure I am not mistaken in assuming that about two-thirds of all home and office PCs are used in a similar scenario.
As can be seen from the program's report, over 3 months the system tried to read 664 GB of information from the OS disk, and 643.61 GB of it was served by the combined cache, i.e. almost 97%. A third of all data, almost 222 GB, was read from the L2 cache. In other words, the system actually read only 3% of the data from the slow HDD; everything else came from RAM and the caching SSD, with the corresponding speeds and interface responsiveness. In everyday terms, Windows did not slow down.
The second important element of the statistics is the total write requests from the system: almost 369 GB, of which only 223.43 GB, or 62.1%, reached the real drive. The point is that some data loses relevance before the physical write happens: if a block has not yet been flushed from the RAM cache (the delay is set manually; mine is 60 seconds) and is overwritten or discarded in the meantime, it never reaches the drive at all. In our case, thanks to caching, more than 100 GB was never written. For an SSD this could be called saving its resource, although today that indicator is of little relevance.
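A quick back-of-the-envelope check of the report's figures (all inputs are taken from the report above, in GB):

```python
# Figures from the PrimoCache report over ~90 days of uptime.
read_requested = 664.0        # GB the system asked to read
read_from_cache = 643.61      # GB served by RAM + L2 cache
write_requested = 369.0       # GB the system asked to write
write_reached_hdd = 223.43    # GB that actually hit the drive

read_hit_rate = read_from_cache / read_requested * 100
writes_saved = write_requested - write_reached_hdd

print(f"read hit rate: {read_hit_rate:.1f}%")                     # ~96.9%, i.e. "almost 97%"
print(f"writes that never hit the disk: {writes_saved:.1f} GB")   # well over 100 GB
```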
Finally, I will note that the caching took place on a laptop with 16 GB of RAM and a 16 GB caching SSD, and those 16 GB of cache on the solid-state drive filled up in about 60 days of uptime. After that, irrelevant data was replaced with new data, but if the SSD were larger, it would eventually duplicate almost the entire main drive, at least the part that is actively read more than 2-3 times. That is, if we used a 128 GB SSD as the cache, games would slowly be duplicated there too. That would be Optane.
But in reality, as you know, everything is a bit different. The younger versions of the miraculous Optane suddenly began appearing at flea markets at a price of about 10 USD, often unused, i.e. new! According to Intel, they cannot be used for caching outside the native ecosystem. At best, on incompatible systems, they can serve as a normal NVMe drive, occupying a scarce slot with a comical capacity of 16 or 32 GB. As the film classic said: this is not serious!
And one can understand why: who needs even cosmically fast 16 GB when the same high-speed slot could hold something much faster and larger, say an upper-series Samsung, for relatively adequate money? Samsung really does make competitive drives today, and if you also put them in RAID... but we digress.
However, there is a way to use Optane as Intel envisioned it, for the very users of old systems to whom Intel so unreasonably closed the window of opportunity.
And now we will illustrate it on a platform that is antique enough, but still quite common in SOHO. This is not about one specific piece of hardware, but rather about the early period of transition from IDE interfaces to SATA. For these purposes, I found a quite working specimen from ASUS: the M2NPV-VM.
It was a fairly inexpensive but quite well-equipped base for an office or home machine at the time: integrated video with the ability to connect 2 monitors, up to 8 GB of supported DDR2 memory, 2x IDE, 4x SATA2, and even LPT (financiers will understand its value back then), not counting USB, network, audio and 1394a.
Back in 2006, I built a very good office on boards of this series, and it still solves financial and accounting tasks, albeit in a slightly upgraded form; the platform allows this to be done cheaply and without migrating the OS with all its add-ons.
The weakest point of the build, by today's standards and for its task profile, is the drive. The easiest way to solve the issue is to clone the system disk onto an SSD and work with media that is fast and inexpensive today. But there is a nuance: although SSD endurance today can be considered a non-issue, an SSD nevertheless fails unexpectedly and irrevocably.
With regular backups this plays no role, but not everyone makes them, and not always, especially at home. In the office, of course, there simply must be a responsible support guy, but in practice he does not sit in every office, or all day; and where he does sit, corners are not cut, and background backup is usually set up and running.
It is easy to see that the test board has a SATA2 interface, which gives us a theoretical 300 megabytes per second of bandwidth, unattainable for mass drives of that time. A typical hard drive of the era looks like this in today's popular test:
A heartbreaking sight. That is how Eeyore the donkey would have characterized the result of the run.
Modern counterparts look no better.
It is not hard to guess that we do not need this kind of hockey today. But what if the accounting department runs on these ancient brakes, and migration is for some reason inappropriate?
The answer is actually exactly one and a half, and both halves have common roots. At the link above we figured out that a crutch for a slow, but not suddenly-dying, hard drive can be arranged even through a small SSD using PrimoCache. The software creates a buffer in RAM, from which data is cached to the SSD, and only then written to the HDD in the background. As for reading, after the initial read the popular data will be served either from the fast RAM cache or from the copy on the caching SSD: a little slower than RAM, but at quite solid speed. The general logic is recalled below.
An average, very inexpensive 16 GB SSD with a SATA interface shows the following speeds on the system in question.
This is an undoubted breakthrough: reading from the cache is about 60 (!) times faster than from the HDD. That is, popular data is duplicated on the caching SSD, and when accessed, it is read at SSD speed. Writing through such a cache happens with a delay, via the SSD, about 30 (!) times faster. At the same time, there is no risk from an unexpected failure of this SSD: if it dies, the system will quietly boot from the main HDD, ready for operation. Slow, but whole. Only the data that had not yet been flushed from the cache to the disk can be lost.
I would like to note that linear write speeds do not interest us here, primarily because the vast majority of everyday user operations with drives have a queue depth of 1-4, and mostly 1. This does not mean that long queues never occur, but they are infrequent, and linear speed matters only for targeted reads and writes of large data arrays, which in SOHO is a rarity. As an example, take Intel's research on its own employees, with the average depth of read requests.
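The arithmetic behind this is simple: at queue depth 1, the next request is issued only after the previous one completes, so throughput is just block size divided by completion latency. The latencies below are illustrative assumptions of mine, not measurements from this article:

```python
# Why QD1 4 KB operations are latency-bound, not bandwidth-bound.
BLOCK = 4 * 1024  # bytes per request

def qd1_throughput_mb_s(latency_s):
    """Throughput at queue depth 1: one request in flight at a time."""
    return BLOCK / latency_s / 1e6

hdd_seek = 10e-3     # ~10 ms: assumed seek + rotation for a desktop HDD
optane_lat = 10e-6   # ~10 us: assumed latency class for 3D XPoint media

print(f"HDD:    {qd1_throughput_mb_s(hdd_seek):.2f} MB/s")    # ~0.41 MB/s
print(f"Optane: {qd1_throughput_mb_s(optane_lat):.0f} MB/s")  # ~410 MB/s ceiling
```

With numbers like these, linear megabytes per second say almost nothing about how responsive a system feels; latency at shallow queues does.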
And here is the queue depth at the launch of popular software:
The important point here is that speeding up work at queue lengths of 1-4 radically improves the responsiveness of the system to the user. The user cares less about the minutes spent recording a movie than about freezes of fractions of a second when working with a familiar interface. Caching eliminates these split-second freezes, up to the limit of the processor's ability to handle the user's task. That is, the disk ceases to be the bottleneck.
So what does Optane have to do with it? Why have we read so much text? One... no, two cups of tea later, it turns out everything is possible!
Of course, there is nowhere to attach NVMe directly to a board that saw dinosaurs, and it would seem there is no need: after all, we can implement the caching we are talking about using absolutely any conventional SATA SSD, even on SATA2. That interface is no restriction for working with small files, which form the absolute majority of operations. The cosmic linear speeds of modern SSDs are simply not in demand in small-file scenarios, much like the dozens of processor cores that software long failed to utilize; although it has always looked cool in synthetic testing.
However, the board is equipped with PCI Express x16 and x1 slots. And while you can put a video card in the x16 and even play a little something from the not-so-distant past, the x1 often remains free (if you are not mining), and its single lane should in theory provide 250 megabytes per second of bandwidth, which is even less than SATA2.
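Where the 250 MB/s figure comes from: a first-generation PCIe lane runs at 2.5 GT/s with 8b/10b encoding, so only 8 of every 10 bits on the wire carry payload.

```python
# Theoretical bandwidth of one PCIe 1.x lane, as in the x1 slot here.
line_rate = 2.5e9                  # transfers (line bits) per second per lane
payload_bits = line_rate * 8 / 10  # 8b/10b encoding: 8 payload bits per 10 line bits
bytes_per_s = payload_bits / 8

print(f"{bytes_per_s / 1e6:.0f} MB/s per direction")  # 250 MB/s
```

Protocol overhead shaves a bit more off in practice, which is why the drive below stops just short of this ceiling.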
Still, it is PCI Express, which means you can try to install an NVMe device there!
Of course, it will not work directly, because the physical contact layouts of a PCIe slot and M.2 are different. But maybe there are options?
There are, kindly made in Asia and offered on AliExpress for not very much money. It looks like this.
Order, wait, hope.
In the meantime, we look for our experimental Optane, and find it on the nearest online flea market for exactly 10 USD.
Yes, I bought one specifically for this material, but it was apparently not the only offer, and "there will be more of them."
For the sake of this material I took a pico-financial risk: in case of failure, neither the Optane nor the adapter would be of much use to me, since I already use SSDs with full-fledged caching software, so there would be nowhere to apply them. Back to the flea market, perhaps, but...
Thank you, citizen of Kharkiv, for the honest shipment. I had no opportunity to check it at the post office, but the Optane turned out to be alive, and you are now participating in the writing of this text. Perhaps, if you are reading us, you will write in the comments where these "disks" came from at that price, although we already assumed above that they proved useful to few of those whose systems shipped with them; hence they appear at flea markets at prices very different from retail. Very different.
So, in theory, we should put the Optane into the Chinese adapter, the adapter into the PCIe x1 slot on the motherboard, turn the machine on and hope that the whole construction starts. If the manuals do not lie, we will get something. In the meantime, this character observes the manipulations with interest, suspecting us of stealing the idea.
“There is an island on the sea on the ocean, an oak stands on that island, a chest is buried under an oak tree, a hare is in a chest, a duck is in a hare, an egg is in a duck, and a needle is in an egg”
Well then. We assembled our chest, i.e. the system unit, and hopefully press the power button. Although in theory it should work, at the time our motherboard was produced no one in their right mind could have imagined such constructions, and therefore no one gave any guarantees of success.
3-2-1 ... let's go.
And here is the first stop: the BIOS expectedly ignores our contraption. Nothing particularly unexpected; hoping it would boot with such an appendage would have been too presumptuous. We move on.
I forgot to clarify: for completeness, we chose Windows 7 Ultimate 64-bit SP1 as the operating system.
And in fact we got the hole from the donut: in the device list, our engineering solution turned out to be visible, but unidentified.
And these two are around the corner.
Over the years, I must admit, I forgot that Windows 7 did not immediately learn to understand NVMe. It had every right not to: in 2009 no one had yet seen such outlandish hardware, more or less modern implementations were still some 5 years away, and Microsoft composed the corresponding crutch only later. The crutch is described at the link, but as luck would have it, patch KB2990941 for the Seven is no longer officially available. As if such a trifle could scare us!
A few seconds later we find what we are looking for at hotfixshare.net/board/index.php?showtopic=22296, drag it onto the globe, i.e. the OS, and await the result of a reboot.
After the restart, the system has a new identified device in the drives section. Here those interested can familiarize themselves with the complete list of drives used during testing.
Their faces radiate joy and serenity: the effect has been achieved (c). Why the system identified the Optane so strangely is less interesting than looking into Disk Management to see how things stand there. Looking ahead: the media had to be formatted, and it appeared like this.
The Optane is on top. I deliberately show two disks in this screenshot: the second one is the ordinary 16 GB Kingston SSD that was used earlier with PrimoCache for caching in this system. As you can see, the "16 GB" figure is conditional for both media. This is primarily because manufacturers count gigabytes and megabytes in multiples of 1000, not 1024. Secondly, the numbers show that in both cases the controller performs over-provisioning at the firmware level, i.e. it leaves part of the usable volume inaccessible to the user to ensure reliability and stable speed. How exactly the controller uses the hidden area does not matter to us; what matters is that in the Optane's case the "underfill" is quite noticeable, at about 2.5 GB! So using the younger Optanes as a system drive makes sense only for ultra-compact Windows 7 installations, which in practice is a very narrowly specialized case. And finding a system productive enough to benefit that also has a native M.2 slot is a task in itself. If there is no native M.2, then with a construction like ours not every UEFI will even see the Optane at the POST stage as a drive, i.e. there is no question of booting from it. In our case, the Optane appears as a drive at the Windows driver loading stage, and the whole success of our venture now depends on exactly when the OS becomes aware of it.
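The arithmetic of the "missing" gigabytes is easy to reproduce. The visible capacity below is an assumed example figure of mine, not read off the screenshot; substitute your own from Disk Management:

```python
# Decimal vs binary gigabytes, plus the controller's over-provisioning.
advertised_gb = 16
advertised_bytes = advertised_gb * 10**9   # manufacturers count 1 GB = 10**9 bytes
in_gib = advertised_bytes / 2**30          # the OS counts in GiB (2**30 bytes)
print(f"16 GB decimal = {in_gib:.2f} GiB") # ~14.90 GiB lost to units alone

visible_gib = 13.4                         # assumed visible size of the Optane
overprovision = in_gib - visible_gib       # the rest is hidden by the controller
print(f"over-provisioning: ~{overprovision:.1f} GiB")
```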
Now that we have attached the Optane to the antique board with the help of a crutch, it is time to run the well-known test. It should be said in advance that with a board as specific as ours, the results must be judged with an allowance for drivers and hardware. Note that for now we are testing the Optane as a normal drive.
Start, takeoff, result. The manufacturer's basic specifications are listed on the right.
For a museum-exhibit platform, the result is very decent indeed. Let us try to understand it in more detail. Let me remind you that these figures interest us primarily as orders of magnitude, because their repeatability will differ on other systems. And the newer the system, the higher the performance will be!
The maximum linear read speed fell slightly short of the theoretical maximum of our PCIe x1 at 250 megabytes per second in one direction. And when reading blocks of up to 4 KB at a depth of 1, the figure hardly changes.
The web is full of test results for Optanes of various calibers, and readers can compare our result with those obtained on top-end modern hardware. We will simply state that on a board about 10 years old, ancient as mammoth wool, we see a simply cosmic leap in the speed of 4 KB block operations compared with the HDDs of that period!
Once again for clarity.
For 4 KB blocks at a queue depth of 1, the gain is about 312 (!) times. Given that these operations constitute the majority of everyday work, the responsiveness of a system with such a drive should be limited only by the processor. But that holds only if such a disk is the system disk, and at the capacities needed for that they cost cosmic money. So we need to think about how to shift part of the reads of system files and other software onto the Optane. The answer comes from the previous material: PrimoCache and other caching programs that can use fast drives as a buffer.
In our case, PrimoCache duly recognized the construction and was ready to use it for caching.
In the PrimoCache system window it looks like this.
So, we approach the culmination of this whole term paper. PrimoCache duly identified the fast Optane as a potential L2 cache and was configured to use it.
The moment of truth comes at restart: we do not know at what stage of OS boot the Optane is recognized as a drive and, accordingly, when PrimoCache begins to use it. PrimoCache can use the caching SSD already during the boot process, serving part of the boot from it, which makes startup much faster. The effect of read caching via a SATA SSD at the boot stage was established here in practice, and it is obvious.
Reboot and... the Optane in the adapter does manage to get to work: we have finally established the ability to use it as a caching disk for an old HDD-based system using PrimoCache.
In the end, as we remember, after a while PrimoCache will duplicate the popular data onto the Optane, and further reads will come from it at very high speed. Depending on the PrimoCache edition, you can cache both reads and writes. The key difference from the model Intel offered for Optane is that Intel's algorithm moves the OS's popular data to the Optane, while PrimoCache duplicates it. That is, with PrimoCache, if the cache drive dies, the data does not disappear: the system returns to booting from the HDD and working with it, slowly.
What final figures did the idea produce? Screenshots below.
Let us try to make sense of them. In our system, PrimoCache takes 2 GB of antique DDR2, and within this volume any operations with the "disk" subsystem happen essentially in RAM, at the corresponding speed. Understand that on more modern platforms the speed will be higher still. When work with the data is finished, it is deposited in the background, imperceptibly (in some cases through the Optane; more on this below), onto the slow HDD, without inconveniencing the user. Logically, this is a kind of RAM drive. If the data array being processed exceeds the allocated amount of RAM, it will not fit into this RAM drive and goes straight to the HDD, unless it is split into files, some of which do fit. The two measurements at the bottom show this well. Evidently, caching is not useless even in this difficult case: data travels to the HDD much faster, and is read from it at least 2-4 times faster than without the cache. That is, the cache manages to process something even with a large array, especially on writes, using the Optane. But, as we wrote above, such loads are infrequent at home and in the office, so waiting for one-off processing of such volumes is not critical, while the responsiveness of the system in typical work becomes radically better!
To understand what contribution our Optane makes to the final result, let's disable PrimoCache's RAM cache and leave only the Optane.
As you can see, the situation has changed considerably.
For the 50 and 100 megabyte cases we observe a low sequential read speed and strong growth on the further passes. This can be explained by the newly created data set being read for the first time: it is simultaneously duplicated to the Optane and read from it later, but the system does not see this and "thinks" that reading still comes directly from drive C:. Larger test samples without the RAM cache apparently do not manage to replicate fully to the Optane during the pass, which leads to the results observable starting from 500 MB.
I think that if the test had run into the bandwidth limit of a PCIe x1 link on writes, the performance drop would have landed exactly there, and in modern systems that ceiling lies far above our 250 MB/s per direction. That is precisely why it is important for PrimoCache to use the combination of a cache in RAM and a cache on a fast drive. Once configured, the software requires almost no further attention, and after an unexpected OS shutdown the caching array is rebuilt from scratch in the background without user involvement. Rebuilding after a failure goes much faster than the initial build; apparently the software keeps some description of the array and simply re-duplicates the data that already exists. The only noticeable effect is a temporary return of performance to HDD level, but, as practice shows, the acceleration is felt again after a few minutes of work.
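For reference, the ~250 MB/s figure is simply the theoretical payload bandwidth of one PCIe 1.x lane: 2.5 GT/s with 8b/10b encoding, so only 8 of every 10 transferred bits are payload. A quick back-of-the-envelope check, with a modern PCIe 3.0 lane for comparison:

```python
# Theoretical one-way payload bandwidth of a single PCIe lane.

def pcie_lane_mb_s(gt_per_s, payload_bits, total_bits):
    """gt_per_s: raw transfer rate in GT/s; encoding given as payload/total bits."""
    bits_per_s = gt_per_s * 1e9 * payload_bits / total_bits
    return bits_per_s / 8 / 1e6   # bits -> bytes -> MB

gen1 = pcie_lane_mb_s(2.5, 8, 10)     # PCIe 1.x, 8b/10b encoding
gen3 = pcie_lane_mb_s(8.0, 128, 130)  # PCIe 3.0, 128b/130b encoding

print(f"PCIe 1.x x1: {gen1:.0f} MB/s per direction")  # → 250 MB/s
print(f"PCIe 3.0 x1: {gen3:.0f} MB/s per direction")  # → 985 MB/s
```

So an x1 slot on an old board tops out right around our 250 MB/s, while even a single modern lane is nearly four times wider.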
That, roughly, is how caching with small Optane modules looked as a product from Intel. AMD later picked up the idea, apparently not because it was really needed, but simply to be no worse than the competitor: the marketing department got its bonus and that was that. Both giants tied the technology to their own hardware in Intel's case, or to a bundle of hardware and partly paid software in AMD's.
But all of this can be done bypassing the logic of the oligopolies, with perfectly ordinary tools. Of course, PrimoCache is not the only software able to cache slow drives through RAM and fast drives, but we use this example as a continuation of the previous material. Besides, some corporate software products in this area are not available even for testing without a hardware dongle.
Speaking of testing: our little test should be treated as an indicator of a trend rather than an exact result.
But why is that gun hanging on the wall, the 16 GB Kingston SSD? A third cup of tea to those who have read this far.
In the previous material we cached the HDD through it. It was a penny option: flea markets are flooded with these and will keep selling them, since at today's prices a 16 GB SSD simply cannot be expensive, flash drives are cheaper. Cheaper, but not faster. And even a 16 GB SSD, as the test above shows, is much faster than any modern consumer HDD. There are, of course, hybrid SSD-HDDs, but the flash cache in them is rarely of meaningful size, and the user is not offered any control over their internal logic.
After connecting the "fabulous" Optane as the caching disk, the Kingston was left unclaimed, but we will find a use for it.
In our system, as we recall, there is only 8 GB of DDR2 RAM, part of which we hand over to PrimoCache as the first-level cache. Through it the data flows to the second-level cache in the form of the Optane, and then settles on the slow HDD. Given the amount of RAM and the appetites of today's programs, browsers in particular, we cannot give up the swap file, and there is no need to. PrimoCache versions above 3 can also cache the swap, i.e. the paging file, but in our case we can unload the system disk, and even the cache, further by placing the swap on a separate SSD. That will be our little Kingston.
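The read path of this layered construction can be sketched like so. This is an illustration under my own assumptions (function and tier names are made up), not PrimoCache's actual algorithm:

```python
# Hypothetical sketch of a three-tier read path:
# level-1 cache in RAM, level-2 cache on the Optane, backing store on the HDD.

def tiered_read(key, ram, optane, hdd):
    """Check the fastest tier first; on a full miss, promote into both caches."""
    for name, store in (("RAM", ram), ("Optane", optane), ("HDD", hdd)):
        if key in store:
            if name == "HDD":
                # First touch: duplicate the data into both cache levels
                # so the next read is served from RAM.
                optane[key] = store[key]
                ram[key] = store[key]
            return store[key], name
    raise KeyError(key)
```

The first read of a block comes from the HDD and seeds both caches; every repeat read is then served from RAM, which matches the "fast after 2-3 runs" behaviour observed below.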
Oh, they will shoot! (with)
If you skim the discussion threads about how Windows handles the swap, you can find, deep in Microsoft's comments, an explanation that when the page file is placed on several disks, Windows will primarily lean on the one that is least busy with requests. That is, by placing a swap on a separate SATA SSD we additionally unload the disk subsystem. Just do not disable the swap on the slow system HDD: some software still requires its presence, and if Microsoft is telling the truth, the swap on the fast solid-state disk will be far more productive anyway, since on top of everything it sits on a physically different medium from the system one. Incidentally, it can also be cached, but that is for real enthusiasts; according to the same Microsoft, work with the swap is dominated by reads, which is exactly what we are arranging from the separate SSD.
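If Microsoft's description is accurate, the selection boils down to something like the following. This is purely a hypothetical model of the described behaviour, not Windows code:

```python
# Hypothetical model: with page files on several disks, the busiest disk
# is avoided, so a dedicated, mostly idle SSD wins almost every time.

def pick_pagefile_disk(disks):
    """disks: {name: current request-queue length}; pick the least busy one."""
    return min(disks, key=disks.get)

# The loaded system HDD loses to the nearly idle swap SSD:
print(pick_pagefile_disk({"C: (HDD, system)": 12, "E: (SATA SSD, swap)": 1}))
```

In other words, keeping a small swap on the system HDD costs nothing: as long as the SSD is less busy, paging traffic should gravitate to it anyway.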
So we have built a Frankenstein in which the system disk is an old HDD that, per the terms of the task, must not be touched; it is cached by PrimoCache in transit through RAM and the Optane, while the swap lies on a physically separate SATA SSD, plus 2 GB of HDD for cold data, backups and so on.
As a result, after 2-3 runs typical tasks are backed by reads from the Optane and then execute very quickly. Specific milliseconds do not interest us. Subjectively it is radically faster than the original setup with the system HDD, and noticeably faster than caching via the SSD alone, both at startup and in use. And all of that with no risk of data loss if the Optane or another SSD fails!
The solution is certainly not a mass-market one, but it is extremely effective, and I am sure you are unlikely to find a second such experiment on antiques with an Optane.