The first flash-based solid-state drives (SSDs) were introduced 12 years ago, yet only now is the technology poised to replace mechanical hard disk drives (HDDs) in the data center, at least for primary storage. Why has it taken so long? After all, flash drives are up to 1,000 times faster than HDDs at random I/O.

This is partly because the industry has ignored the system as a whole, focusing instead on individual storage components and CPUs. That mindset steers buyers toward cost per TB, when the real comparison should be the total cost of the solution with and without flash. In short, most systems are I/O bound, and using flash inevitably means fewer servers are required for the same workload. This usually offsets the price difference.
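The cost argument above can be made concrete with a rough total-cost-of-ownership calculation. All prices and server counts below are invented for illustration, not vendor figures; only the shape of the trade-off matters.

```python
# Hypothetical TCO sketch: flash costs more per TB, but an I/O-bound
# workload needs fewer servers, which can offset the difference.
# All numbers below are illustrative assumptions, not real prices.

def solution_cost(servers_needed, cost_per_server, tb_needed, cost_per_tb):
    """Total solution cost = compute cost + storage cost."""
    return servers_needed * cost_per_server + tb_needed * cost_per_tb

# Assume an I/O-bound workload where SSDs deliver enough IOPS that
# one-third as many servers can handle the same load.
hdd_total = solution_cost(servers_needed=30, cost_per_server=8_000,
                          tb_needed=100, cost_per_tb=30)
ssd_total = solution_cost(servers_needed=10, cost_per_server=8_000,
                          tb_needed=100, cost_per_tb=90)

print(f"HDD solution: ${hdd_total:,}")  # many servers, cheap storage
print(f"SSD solution: ${ssd_total:,}")  # storage 3x pricier, fewer servers
```

Even with flash at three times the cost per TB, the server count dominates the total, which is the article's point about judging the whole solution rather than the drive.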

The turning point for the storage industry came with all-flash arrays: simple plug-in devices that immediately and dramatically increase the performance of storage area networks (SANs). This has evolved into a two-tier storage model in which SSDs serve as the primary storage tier and slower but cheaper HDDs serve as the secondary tier.

Why are SSDs becoming the primary storage technology? Here are six reasons.

Adding flash to a server delivers higher performance even as SSDs now cost less than enterprise hard disks. With the economics favorable and the performance better, SSDs have become the preferred choice for the primary storage tier.

Today we are seeing the rise of NVMe (Non-Volatile Memory Express), a technology designed to replace Serial Attached SCSI (SAS) and Serial ATA (SATA) as the primary storage interface. NVMe is a very fast, low-overhead protocol that can handle millions of IOPS, far beyond its predecessors. Last year NVMe pricing moved closer to SAS drive prices, making NVMe solutions more attractive. This year most server motherboards will ship with NVMe ports, possibly alongside SATA Express ports for SATA drives.

NVMe began as an in-server interconnect, but the new NVMe over Fabrics (NVMe-oF) approach extends the NVMe protocol from the server out to NVMe drive arrays, all-flash arrays, and other storage appliances, and complements the new hyper-converged infrastructure (HCI) model used in cluster designs.

These developments are not over. Storage vendors are committed to shipping 32TB and 64TB SSDs in 2018. That is far larger than the biggest current HDD at 16TB, and HDDs will remain stuck there until heat-assisted magnetic recording (HAMR) technology is production-ready.

The harsh reality is that SSDs achieve form factors HDDs cannot. A high-capacity HDD is locked into the 3.5-inch size. Today there are 32TB 2.5-inch SSDs, plus new form factors such as M.2 and the "ruler" (an elongated M.2-style card) that pack enormous capacity into small devices. Intel and Samsung can reach petabyte-class capacity in a single 1U appliance.

Secondary storage prioritizes low cost over speed, which has hindered the spread of SSDs in that market. The rise of 3D NAND and new quad-level cell (QLC) flash devices will largely close the price gap, and the huge per-drive capacities will compensate for what remains by reducing the number of devices needed.

SSDs have a secret weapon in the secondary market: the extra bandwidth across the storage fabric makes deduplication and compression practical, effectively expanding capacity by 5x to 10x. This drives the per-gigabyte cost of a QLC flash solution below that of an HDD.
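The economics of that claim reduce to one division. The raw prices per GB below are hypothetical placeholders, but the 5x-10x reduction ratios are the ones the article cites:

```python
# Effective price per usable GB after data reduction.
# qlc_raw and hdd_raw are assumed, illustrative $/GB figures;
# the 5x-10x ratios come from the article's dedup/compression claim.

def effective_price_per_gb(raw_price_per_gb, reduction_ratio):
    """Usable cost falls linearly with the data-reduction ratio."""
    return raw_price_per_gb / reduction_ratio

qlc_raw = 0.08   # assumed $/GB for raw QLC flash (hypothetical)
hdd_raw = 0.03   # assumed $/GB for capacity HDD (hypothetical)

for ratio in (5, 10):
    qlc_eff = effective_price_per_gb(qlc_raw, ratio)
    print(f"{ratio}x reduction: QLC effective ${qlc_eff:.3f}/GB "
          f"vs HDD ${hdd_raw:.3f}/GB")
```

Because HDDs lack the spare bandwidth to run the same reduction inline, their raw price is effectively their usable price, which is how flash undercuts them.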

In the end, perhaps in just three or four years, flash and SSDs will be the dominant storage products in the data center, and HDDs will be gone everywhere except among the most conservative users. The following sections look in depth at how SSDs will come to dominate data center storage.

(1) System Performance

In the 37 years since the introduction of the x86 architecture, CPU speed has followed Moore's Law, doubling every couple of years. Over the same period, HDD random access speed has improved only about 3x. Storage arrays added some parallelism by striping data across drives, but could not keep pace with CPU performance.
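The scale of that divergence is easy to underestimate. A back-of-envelope calculation, assuming the usual Moore's-law shorthand of a doubling every two years (a simplification, not a measured figure):

```python
# Back-of-envelope: CPU throughput doubling every ~2 years over 37
# years, versus the roughly 3x gain in HDD random access cited above.
# The two-year doubling period is a simplifying assumption.
years = 37
cpu_gain = 2 ** (years / 2)     # ~2^18.5, hundreds of thousands of times
hdd_gain = 3

print(f"CPU gain: ~{cpu_gain:,.0f}x")
print(f"HDD gain: ~{hdd_gain}x")
print(f"I/O gap widened by ~{cpu_gain / hdd_gain:,.0f}x")
```

However rough the constants, the gap is five orders of magnitude, which is why I/O, not compute, became the bottleneck.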

Then came SSDs, where a single drive can outrun a large HDD storage array. This forced a rethink of the storage system, and the conclusion was that reduced I/O demand means fewer servers are needed for the same workload. For example, NVMe SSDs are fast enough to support in-memory databases that run up to 100 times faster than traditional HDD-based systems.

We are already seeing the results of this new perspective. Traditional storage array sales are declining, and in servers, SSDs are replacing enterprise-class SAS drives. Frankly, fast HDDs are facing imminent demise.

(2) Pricing of Flash Memory Chips

The ramp-up of 3D NAND was not smooth, which delayed the expected price decline. 3D NAND is now a solid technology, its production problems have been solved, and prices have begun falling again, but the gap with HDDs is still roughly three to one.

That gap gives HDD manufacturers some breathing room, but storage tiering affects the total cost of ownership of the whole cluster, so even buyers of bulk-capacity storage will consider SSDs.

More new flash foundries come online in 2018. Combined with advances in 3D NAND, die stacking, and QLC technology, they will produce a new generation of high-capacity, read-optimized flash that will take over the secondary storage tier as a cheaper, more compact choice than any HDD solution. And for any given capacity, fewer storage devices will be needed.

(3) Storage Layering

As the storage model evolved from large farms of parallel HDDs to fast SSDs, tiering between fast primary storage and cached secondary storage took hold. HDDs can still serve slow bulk storage, but rapidly growing SSD capacities and low-cost flash such as 3D NAND and QLC cells mark the transition to flash-based secondary storage.

In addition, compression and deduplication ride on the spare bandwidth of SSDs, and in most use cases they expand the effective capacity of secondary storage by 5x to 10x. Applying deduplication and compression at the fast primary tier also saves network bandwidth on transfers to secondary storage, significantly reducing what must be purchased for the secondary tier.
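The deduplication half of that capacity gain works by storing each unique block only once, keyed by a content hash. A minimal sketch of the principle (real systems add compression, reference counting, and persistence; the `DedupStore` class and its block layout here are invented for illustration):

```python
# Minimal block-level deduplication sketch: identical blocks are
# stored once, keyed by their SHA-256 content hash. Illustrative
# only; production systems add refcounting, compression, persistence.
import hashlib

class DedupStore:
    def __init__(self, block_size=4096):
        self.block_size = block_size
        self.blocks = {}      # content hash -> block bytes (unique copies)
        self.logical = 0      # bytes written by clients

    def write(self, data: bytes) -> list:
        """Split data into blocks, store unique ones, return hash refs."""
        refs = []
        for i in range(0, len(data), self.block_size):
            block = data[i:i + self.block_size]
            h = hashlib.sha256(block).hexdigest()
            self.blocks.setdefault(h, block)   # only first copy is stored
            refs.append(h)
        self.logical += len(data)
        return refs

    def reduction_ratio(self) -> float:
        physical = sum(len(b) for b in self.blocks.values())
        return self.logical / physical if physical else 0.0

store = DedupStore()
store.write(b"A" * 4096 * 10)   # ten identical blocks -> one stored
store.write(b"B" * 4096 * 10)   # ten more identical blocks -> one stored
print(f"reduction: {store.reduction_ratio():.1f}x")  # prints 10.0x
```

Hashing and looking up every 4 KB block is exactly the kind of extra I/O and metadata traffic that is cheap on a flash fabric and prohibitive on HDDs, which is why the article ties data reduction to SSD bandwidth.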

(4) New storage software

SSDs have a performance advantage, and startups are now bringing creative ideas to software-defined storage (SDS), chaining data services together to "mine" the underlying data layers. As data passes through such a chain of services, the low access latency of SSDs becomes critical.

Object storage is evolving toward the SDS model, and current object stores already show excellent performance on SSDs. Experts predict that file and block I/O will quickly become access protocols layered on an underlying object store, enabling a unified and simpler storage model.

(5) NVMe over Ethernet

As SSD performance has soared, cluster fabric bandwidth has grown with it. In 2010, 1 Gigabit Ethernet was considered fast; today, 400GbE backbones are coming to market. More importantly, Remote Direct Memory Access (RDMA) support is now common, and RDMA frees a large amount of CPU time otherwise spent moving data.

The SCSI protocol is ending its 30-year reign over storage as the more efficient NVMe protocol replaces it. SSDs speaking NVMe can deliver millions of IOPS for big data analytics and other applications.

We now see NVMe tailored to run over Ethernet. This lets SSDs connect directly to the cluster fabric, bringing new speed and connectivity to hyper-converged infrastructure.

(6) New form factors

Unlike HDDs, SSDs are not bound to HDD form factors. There are already 32TB 2.5-inch SSDs, and in 2018 vendors will introduce 64TB models in the same dimensions. HDDs will not catch up until HAMR-based products arrive in 2019 or 2020, and even then, high-capacity hard disks remain stuck at 3.5 inches with no breakthrough in sight.

This means servers and appliances using SSDs pack far more capacity into the same space. For example, a 2U server can now hold 24 SSDs (768 TB). The same server holds only 12 HDDs, for roughly 180 TB.
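The density comparison above is simple arithmetic. The drive capacities are the ones cited in the article; the bay counts (24 x 2.5-inch vs 12 x 3.5-inch in 2U) are typical chassis figures:

```python
# 2U chassis density: 24 x 32TB 2.5-inch SSD bays versus
# 12 x 15TB 3.5-inch HDD bays (drive sizes as cited in the article;
# bay counts are typical 2U chassis assumptions).

ssd_capacity = 24 * 32   # TB per 2U all-SSD server
hdd_capacity = 12 * 15   # TB per 2U all-HDD server

print(f"2U all-SSD: {ssd_capacity} TB")   # 768 TB
print(f"2U all-HDD: {hdd_capacity} TB")   # 180 TB
print(f"advantage:  {ssd_capacity / hdd_capacity:.1f}x")
```

The advantage comes from both per-drive capacity and the smaller 2.5-inch bays, and it compounds further with the ruler form factor described next.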

In addition, the compact M.2 SSD has spawned a new device: the 32 TB raw-capacity "ruler" drive. Compared with traditional arrays or today's server-based nodes, it saves significant equipment cost and rack space. Add deduplication and compression, and the capacity of a 1U storage appliance may reach 5PB.
