Lightning is a joint project between Facebook and Wiwynn that was presented at the latest Open Compute Summit. The idea behind Lightning is to apply the JBOD concept from the hard-drive world to flash storage: Lightning is a JBOF (Just a Bunch Of Flash) solution.
Running Lightning all-flash storage with OCP recertified servers
The project is based on the well-known Knox approach and occupies 2 OpenU in a standard Open Rack v2. It is backward compatible with Open Rack v1, which makes it a perfect candidate for our recertified server program.
Last week we had the opportunity to visit the OCP lab hosted on the Facebook campus with an "old" but still healthy Winterfell server (dual Xeon 2680 v2, 64 GB of RAM and 2 TB drives) that had gone through the Sesame recertification process, with the intent of validating the compatibility of the Lightning solution with it.
Being a flash storage solution, the Lightning chassis connects to the server over PCIe. One benefit of this protocol is extremely low latency combined with high bandwidth. To extend the maximum length of the bus, a retimer card is added to each server, and the physical interconnect between the Lightning chassis and the server uses standard SAS cables to keep production costs down.
Each Lightning chassis can support up to 4 node connections over pure PCIe. In that case each connection uses up to x8 lanes per server. The maximum width per server is x16 lanes, which provides a total bandwidth of 32 GB/s, more than enough compared to current networking bandwidth, even with a 100 Gbps connection.
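As a back-of-the-envelope check on the 32 GB/s figure, here is a minimal sketch of the link math. The constants are assumptions on our part (PCIe Gen3 at 8 GT/s per lane with 128b/130b encoding); the article itself does not name the PCIe generation:

```python
# Back-of-the-envelope PCIe bandwidth check.
# Assumptions (not stated in the article): PCIe Gen3 links,
# 8 GT/s raw rate per lane, 128b/130b line encoding.

GT_PER_S = 8              # Gen3 raw signalling rate per lane
ENCODING = 128 / 130      # Gen3 line-code efficiency

def lane_bandwidth_gbps(lanes: int, bidirectional: bool = True) -> float:
    """Usable bandwidth in GB/s for a PCIe Gen3 link of `lanes` width."""
    one_way = lanes * GT_PER_S * ENCODING / 8  # GT/s -> GB/s (8 bits per byte)
    return one_way * (2 if bidirectional else 1)

print(lane_bandwidth_gbps(8))   # x8, one connection per server
print(lane_bandwidth_gbps(16))  # x16 maximum per server: ~31.5 GB/s, i.e. the ~32 GB/s quoted
```

Counting both directions, an x16 Gen3 link lands at roughly 31.5 GB/s, which matches the rounded 32 GB/s figure above.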
Each Lightning slot (15 per tray, or 30 in 2 OpenU) can host two M.2-style NVMe drives or one standard 2.5-inch drive. The configuration we tested totaled 30 TB, based on 15 Intel DC 3600 SSD drives.
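The slot and capacity numbers can be cross-checked in a few lines. The 2 TB per-drive figure is inferred from 30 TB across 15 drives, so treat it as an assumption rather than a spec from the article:

```python
# Cross-check of the tested configuration.
# Figures from the article; per-drive capacity is inferred, not stated.
slots_per_tray = 15
trays = 2
m2_per_slot = 2

max_m2_drives = slots_per_tray * trays * m2_per_slot  # M.2 drives per 2 OpenU chassis
drives_tested = 15
total_tb = 30
per_drive_tb = total_tb / drives_tested               # implied capacity per drive

print(max_m2_drives)   # up to 60 M.2 drives in a fully populated chassis
print(per_drive_tb)    # 2 TB per drive in the tested setup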
After installing the latest CentOS on the server, along with the NVMe tools, we were able to immediately detect the drives and use them.
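On a Linux host, detected NVMe controllers and namespaces show up under /dev, so a quick detection check can be scripted. The helper below is illustrative only (its name and approach are ours, not the tooling used in the lab); `nvme list` from the nvme-cli package gives a fuller view:

```python
# Minimal sketch: enumerate NVMe namespace block devices on Linux.
# Illustrative only; not the tooling used during the lab test.
import glob
import re

def list_nvme_namespaces() -> list:
    """Return NVMe block devices (namespaces) such as /dev/nvme0n1.

    Matches namespace nodes only, skipping controller character
    devices (/dev/nvme0) and partitions (/dev/nvme0n1p1).
    """
    return sorted(
        dev for dev in glob.glob("/dev/nvme*")
        if re.fullmatch(r"/dev/nvme\d+n\d+", dev)
    )

if __name__ == "__main__":
    drives = list_nvme_namespaces()
    print(f"{len(drives)} NVMe namespace(s) detected")
    for d in drives:
        print(" ", d)
```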
In a way we were expecting that result, but a formal test was required before we could integrate the solution into our recertified server program.
We reached the level of performance we were aiming for, and it is simply blazing: more than 12 GB/s on a 30 TB configuration.
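Spread across the drives, that aggregate figure gives a rough per-drive rate. This is a naive average, assuming the load is spread evenly over all 15 SSDs:

```python
# Naive per-drive throughput estimate from the aggregate figure.
aggregate_gb_s = 12   # measured aggregate, from the article
drives = 15           # SSDs in the tested configuration

per_drive_gb_s = aggregate_gb_s / drives
print(per_drive_gb_s)  # ~0.8 GB/s per drive on average
```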
Lightning has plenty to offer. One option is to balance the investment: instead of super-expensive drives with a high TBW rating, you can build an entry-level or mid-range all-flash storage system out of low-cost NVMe M.2 drives, which, when properly balanced, add up to a high-TBW solution.
Because our recertified Winterfell server works straight out of the box in this setup, it lowers the acquisition cost of such a storage configuration by shifting the investment to the core value, the flash storage itself, and makes it a perfect match for building an innovative, ecological storage building block!
We would like to thank the OCP lab for setting up this testing phase so quickly and for providing us access to this state-of-the-art JBOF, as well as our contacts at Wiwynn.