Can Tabula and Tier Logic be successful?
Dr. Olivier Coudert
The dominant factor in a classical FPGA architecture is the interconnect: most of the die area is taken up by wires, interconnect switches, and muxes. If you can somehow reduce the area dedicated to interconnect, you can increase logic density and lower the cost of the device. Tabula and Tier Logic both pitch a 3D architecture to address the interconnect bottleneck, albeit in very different flavors.
Tabula's innovative design is based on its ability to reconfigure itself, up to 8 times per user cycle with a core clock running at 1.6GHz. At each cycle a cell can change its functionality, its latch configuration, and its interconnect. This time-multiplexing increases the amount of logic that fits in the same area. It is like having 8 layers (or “folds”) of cells stacked on top of each other along a time axis, with very short connections between cells at the same (x,y) coordinate in two adjacent folds. At each cycle the device jumps to the next fold and feeds the newly configured logic with the results of the previous fold. Tabula claims this increases logic density by 2.5x compared to classical FPGA architectures.
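To make the fold idea concrete, here is a minimal, purely illustrative sketch in plain Python. It has nothing to do with Tabula's actual tools or configuration format; the toy XOR logic is made up. It only shows the scheme: eight "folds" of logic applied in sequence, each consuming the previous fold's outputs, so one user clock tick consists of eight fast core cycles on the same physical cells.

```python
def make_folds():
    # Each fold reconfigures the same physical cells with new logic.
    # Here a fold is just a function on a list of bits (toy logic:
    # XOR each bit with a neighbor at a fold-specific offset).
    def fold(shift):
        def f(bits):
            n = len(bits)
            return [bits[i] ^ bits[(i + shift) % n] for i in range(n)]
        return f
    return [fold(s) for s in range(1, 9)]  # 8 folds per user cycle

def run_user_cycle(bits, folds):
    # One user clock tick = 8 core cycles: step through the folds,
    # each fold feeding its results to the next.
    for f in folds:
        bits = f(bits)
    return bits

inputs = [1, 0, 1, 1, 0, 0, 1, 0]
outputs = run_user_cycle(inputs, make_folds())
print(outputs)
```

The point of the model is that density comes from reuse in time: the eight folds share one set of physical cells instead of occupying eight times the area.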
Tier Logic’s design idea is to place the SRAM cells that configure the interconnect muxes on top of the routing layers, instead of distributing them throughout the logic die area. Doing so leaves more room for logic cells, increasing cell density by about 50% according to the company. The design flow will not throw anybody off: it uses Mentor’s Precision for synthesis, followed by Tier Logic’s own mapping and P&R.
A big plus touted by Tier Logic is the ability to move painlessly from their device to an ASIC. Simply replace the interconnect configuration SRAM cells at the top with a metal layer, and voila, you obtain an ASIC with no change in timing. This is a simple, predictable process: it takes about 4 weeks to go from the SRAM configuration to a top-layer mask, and you do not need to go through timing closure again, which means a non-recurring engineering (NRE) cost of about $50k. That is a real bargain when you consider that moving from an FPGA to an ASIC usually requires a redesign that can take as long as 9 months.
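A quick back-of-the-envelope sketch shows why that NRE figure matters. The unit prices below are hypothetical, picked only for illustration; the $50k NRE is the figure quoted above.

```python
# HYPOTHETICAL unit prices; only the ~$50k NRE comes from the article.
fpga_unit_cost = 30.0   # assumed price per FPGA unit ($)
asic_unit_cost = 10.0   # assumed price per converted-ASIC unit ($)
nre = 50_000.0          # one-time cost of the metal top-layer mask

# Volume at which NRE + ASIC units costs the same as staying on FPGAs:
break_even = nre / (fpga_unit_cost - asic_unit_cost)
print(f"Break-even at {break_even:.0f} units")  # 2500 units
```

With these assumed prices the conversion pays for itself after a few thousand units, which is exactly the low/medium-volume territory the article describes.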
So which of Tabula and Tier Logic is better positioned to challenge the Xilinx/Altera duopoly?
Tabula has made it clear that it is aiming at the high end of the FPGA market. A number of FPGA startups have targeted the same niche, and none survived. One reason is that it is easy for Xilinx and Altera to increase the size of their devices simply by moving to the next technology node. Tabula’s design is innovative and pushes the limits, but how far is too far? It is unclear whether the company can deliver design tools that match their device’s challenges: they went through a complete reset a few years ago, replacing the whole software team. Verifying a device that can reconfigure itself 8 times in a loop may be another challenging problem. The increased density is obtained by continuous reconfiguration, which means extra power consumption: is that still an acceptable tradeoff? Last but not least, with 100+ people in the US, it is well known in Silicon Valley that Tabula burns cash fast, and its $106 million in funding so far may soon come up short.
Tier Logic’s FPGA can reduce the cost of the device for the same density. But their truly compelling value proposition is the FPGA-to-ASIC translation. This is what Altera’s HardCopy was supposed to be: a seamless, risk-free migration from FPGA to ASIC. For anybody who wants to design an application and then migrate to low/medium-volume ASIC production, this could be the most cost-efficient solution. I do not know the inside story on the financial side, but their business proposal looks more solid.
So who do you think has a chance here? Let’s meet again in 3-4 quarters and see how the two companies are doing.
Dr. Olivier Coudert has 20 years’ experience in software architecture and EDA product development, including 10 years in research. He received his PhD in Computer Science from Ecole nationale supérieure des Télécommunications, Paris, France, in 1991. He has published 50+ papers and book chapters, and holds several patents on combinatorial optimization and physical synthesis. He is a recognized expert in formal verification, logic synthesis, low power, and physical synthesis. He has led the development of several EDA products, including 3 built from scratch in a startup environment. You can follow Olivier on Twitter, meet him on LinkedIn, or read his blog.