Computer/Embedded Technology


InfiniBand - ready to meet its markets?

30 November 2005

After its first spirited appearance in 2000, InfiniBand technology's experience in the marketplace has been chequered. The PCI bus it was designed to succeed found a new lease of life, and other competitors crept into the niches that InfiniBand had targeted.

But things change in the technology market, and the extraordinary features unique to InfiniBand are becoming apparent. Driven by the development efforts of companies like SBS Technologies, high performance computing (HPC) applications are taking advantage of InfiniBand, particularly in vital medical and military applications. With storage systems and server blade computing also increasing in significance, InfiniBand architecture is ready to begin a powerful new chapter.

Overview: InfiniBand in the bus' driving seat?

It is a familiar cycle. A core technology appears, which makes possible previously unimaginable processes by revolutionising computing speed and power. Then within moments, it seems, a glut of applications and systems follows that soaks up nearly all the new power, bandwidth or capacity. Just months after its release, that core technology requires expanding, improving or total reworking if it is to continue to support the applications it has spawned.

Which makes the longevity of the PCI bus even more remarkable. The mechanism which connects all the internal computer components to the CPU and main memory, the bus is the vital information circulatory system of the computer and has remained largely unchanged since its introduction by Intel in the early '90s. Its remarkable dominance is, in part, due to its power to dictate the design of higher level components, and PCI has scaled up only once, to PCI-X (a bandwidth of up to 8,5 Gbps compared to PCI's roughly 1 Gbps). Today, however, the combined forces of the Internet, increasingly server-intensive computing and a far less predictable data workload are forcing the bus to a stop.
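The gulf in raw bandwidth is simple arithmetic: a parallel bus's peak rate is its data width multiplied by its clock. A minimal sketch (illustrative only, using the nominal 32-bit/33 MHz PCI and 64-bit/133 MHz PCI-X figures):

```python
# Peak rate of a parallel bus = data width (bits) x clock rate.
def bus_peak_gbps(width_bits: int, clock_mhz: float) -> float:
    """Peak transfer rate in Gbps (1 Gbps = 1e9 bits/s)."""
    return width_bits * clock_mhz * 1e6 / 1e9

pci = bus_peak_gbps(32, 33)      # original PCI: ~1 Gbps (133 MBps)
pci_x = bus_peak_gbps(64, 133)   # PCI-X 133: ~8,5 Gbps
print(f"PCI:   {pci:.2f} Gbps")
print(f"PCI-X: {pci_x:.2f} Gbps")
```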

InfiniBand is part of an industry-wide response to the inevitable obsolescence of PCI, in all its variant forms. Its development is controlled by the InfiniBand Trade Association (IBTA), which includes the major server vendors and several other interested parties. Ultimately, InfiniBand has become both an I/O architecture and a specification for the transmission of data between processors and I/O devices. Instead of sending data in parallel, which is what PCI does, InfiniBand sends data in serial and can carry multiple channels of data at the same time. The principles of InfiniBand are similar to those of channel-based mainframe computer systems, in that InfiniBand channels are created by attaching host channel adapters (HCAs) and target channel adapters (TCAs) through InfiniBand switches. HCAs are I/O engines located within a server, while TCAs enable remote storage and network connectivity into the InfiniBand interconnect fabric.
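As a rough illustration of that topology - the class names below are hypothetical, not any real InfiniBand API - hosts and targets attach to a switch as channel adapters, and every adapter gets its own point-to-point link:

```python
# Toy model of a switched fabric: HCAs (in servers) and TCAs (storage,
# network targets) each connect to their own switch port.
class ChannelAdapter:
    def __init__(self, name: str, kind: str):
        self.name, self.kind = name, kind   # kind: "HCA" or "TCA"
        self.inbox = []

    def receive(self, src: str, payload: bytes) -> bool:
        self.inbox.append((src, payload))
        return True

class Switch:
    """Point-to-point crossbar: no medium is shared between adapters."""
    def __init__(self):
        self.ports = {}

    def attach(self, adapter: ChannelAdapter):
        self.ports[adapter.name] = adapter

    def forward(self, src: str, dst: str, payload: bytes) -> bool:
        # Each link is independent, so transfers between different
        # adapter pairs can proceed concurrently (unlike a shared bus).
        return self.ports[dst].receive(src, payload)

fabric = Switch()
hca = ChannelAdapter("server-1", "HCA")    # I/O engine inside a server
tca = ChannelAdapter("disk-array", "TCA")  # remote storage target
fabric.attach(hca)
fabric.attach(tca)
fabric.forward("server-1", "disk-array", b"block write")
print(tca.inbox)  # [('server-1', b'block write')]
```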

The objective is to provide a logical successor to the shared PCI bus, and although initially focused on I/O, the InfiniBand spec proved to be an ideal interconnect fabric for data centres when run over copper or fibre links. Plus, the structure of a point-to-point high-speed switch fabric also ensures inherently high quality of service (QoS), fault tolerance and scalability. In conjunction with some products, however, it becomes possible to integrate with, rather than eliminate, the newest version of PCI, PCI Express. Actively driving this direction, SBS Technologies' IB4XPCIEXP-2 HCA combines the power of 4X InfiniBand with the high-spec PCI Express computer interface to provide tremendous performance improvements. SBS also provides the EIS-4024-1UA switch, a high performance InfiniBand 24-port 4X copper cable switch, which includes the circuitry to detect and power the company's IB4X-OMC device, itself used to convert the copper media interface to enable the use of fibre.

History: the drivers behind InfiniBand

If pressure on the parallel bus was the core technological driver behind the development of InfiniBand, it was a pressure itself driven by profound changes in the overall management of data. With the emergence of personal computers, information management changed rapidly from a model of mainframes and terminals to one of PCs sharing intelligence with racks of powerful servers. The combined PC-server has evolved into increasingly distributed and specialised server implementations, where applications run across a range of systems. The need is for I/O interconnect technologies which do not suffer from bottlenecks, get overloaded or provide inconsistent data speeds across different areas.

Add to this the impact of the Internet: from a highly PC-oriented world, the demand for Internet connectivity has created a world where more than 80% of computing is now done over networks. Subsequent investment in communications infrastructure has opened the door to constant, high-volume data transfers on a global scale, with the quantity, complexity and diversity of data increasing all the time. The result is a highly unpredictable data workload influenced by multiple external factors - making scalability, reliability and performance more elusive and more important than ever. It is in response to these demands that InfiniBand - and its competitors - have emerged.

The InfiniBand architecture - what it is, how it works

InfiniBand is an architecture, that is, a set of specifications incorporating all the layers and mechanisms by which an interconnect fabric can be implemented. IBTA's InfiniBand architecture is divided into multiple layers, each operating independently of the others. The lowest, physical layer defines the electrical and mechanical characteristics of the system, including cables and receptacles for fibre and copper media. The link layer above it encompasses packet layout, point-to-point link operations and switching within a local subnet, while the network layer handles routing of packets from one subnet to another. The transport layer is responsible for in-order packet delivery, partitioning, channel multiplexing and transport services.
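For reference, the layer stack just described can be summarised in code (a recap of the article's description, not the full IBTA specification):

```python
# The IBTA layers as described above, from lowest to highest.
IB_LAYERS = [
    ("physical",  "electrical/mechanical spec: cables, receptacles, "
                  "fibre and copper media"),
    ("link",      "packet layout, point-to-point link operations, "
                  "switching within a local subnet"),
    ("network",   "routing of packets from one subnet to another"),
    ("transport", "in-order delivery, partitioning, channel "
                  "multiplexing, transport services"),
]

for depth, (name, role) in enumerate(IB_LAYERS):
    print(f"{depth}: {name:<9} - {role}")
```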

With link rates of up to 30 Gbps already in place, and forthcoming specs promising bandwidths of up to 120 Gbps, the InfiniBand architecture is a high performance interconnect technology that can sustain current and future bandwidth requirements. It not only provides high speed, but can intelligently offload complex overhead and housekeeping tasks from the computer's processing system, making it highly scalable and efficient, both within and between boxes. So why did it not just take over the world?
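Those headline figures follow directly from InfiniBand's lane structure: a link aggregates 1, 4 or 12 lanes, each signalling at 2,5 Gbps in the original (single data rate) spec, with higher per-lane rates in later revisions. A quick sketch of the arithmetic:

```python
# InfiniBand link bandwidth = lane count (1X, 4X or 12X) x per-lane rate.
SDR_LANE_GBPS = 2.5   # single data rate signalling, per lane

def link_gbps(lanes: int, lane_gbps: float = SDR_LANE_GBPS) -> float:
    return lanes * lane_gbps

print(link_gbps(4))         # 4X SDR: the 10 Gbps of the HCAs discussed here
print(link_gbps(12))        # 12X SDR: the 30 Gbps already in place
print(link_gbps(12, 10.0))  # 12X at 10 Gbps/lane: the announced 120 Gbps
```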

It is tough at the top

InfiniBand was not the only response to the creakings of parallel bus architectures. The warhorse of network computing, Ethernet, reinvented itself as a 1 Gbps and then a 10 Gbps version - although even at that rate it is only just able to satisfy current high-end bandwidth needs and will soon be outgrown by the demands of storage applications. Meanwhile, a rival consortium developed HyperTransport, a high-speed, high-performance point-to-point link for interconnecting integrated circuits on a motherboard; and another group is behind RapidIO, conceptually the most similar to InfiniBand.

Ultimately, marketing considerations became critical, and the RapidIO consortium chose to focus on within-box interconnections, reducing the market for InfiniBand to between-box applications. In conjunction with this, the major industry mover Intel chose to put its development efforts behind PCI Express, the switched serial fabric incarnation of the PCI and PCI-X buses. This was a loss to the IBTA effort, compounded when another consortium's switched interconnect technology, StarFabric, aimed at backplane and chassis-to-chassis applications, was absorbed into PCI Express. This took advantage of StarFabric's network management benefits and led to the creation of PCI Express Advanced Switching. Even HyperTransport achieved a solid foothold in network processor core router designs.

With no single large market or application emerging for InfiniBand to occupy, it looked like the definitive interconnect fabric might have no home to go to. But as sometimes happens, the market changed, new demands arose and the fight became more of a cooperative effort.

HPC: emerging markets and applications

In recent years the demand for vast amounts of data storage and networked applications has raised the profile and importance of high performance computing (HPC), the branch of computer science that develops the processing algorithms and software running on supercomputers. HPC applications use hundreds of clustered servers, often running on open source operating systems such as Linux, and require optimum use of space and power. InfiniBand technology accesses memory directly with its remote direct memory access (RDMA) feature, which bypasses the CPU for information transfer, making it an ideal fit for HPC applications.
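The effect of RDMA can be seen in a toy model (hypothetical classes, not the real verbs API): in a conventional transfer the destination CPU must touch every byte, while an RDMA write places data straight into registered memory without involving that CPU at all:

```python
# Toy comparison of CPU-mediated copying versus an RDMA-style write.
class Host:
    def __init__(self, size: int):
        self.memory = bytearray(size)
        self.cpu_cycles = 0    # counts work done by this host's CPU

def cpu_copy(dst: Host, offset: int, data: bytes):
    """Traditional path: the destination CPU copies every byte
    (and is busy for the duration of the transfer)."""
    for i, byte in enumerate(data):
        dst.memory[offset + i] = byte
        dst.cpu_cycles += 1

def rdma_write(dst: Host, offset: int, data: bytes):
    """RDMA path: the channel adapter places data directly into
    registered memory; the destination CPU does no copy work."""
    dst.memory[offset:offset + len(data)] = data

a, b = Host(64), Host(64)
cpu_copy(a, 0, b"scan data")
rdma_write(b, 0, b"scan data")
print(a.cpu_cycles, b.cpu_cycles)  # 9 0
```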

SBS Technologies is actively developing products that help open up the HPC market to InfiniBand. The IB4X-LPCIEXP-2 is a 10 Gbps low-profile InfiniBand PCI Express host channel adapter (HCA), engineered to drive the full performance of high-speed InfiniBand fabrics. Designed to provide the high throughput and low CPU utilisation required for 10 Gbps embedded data transport applications and high performance computing clustering (HPCC), the HCA is configured with 128 MBytes of DDR memory. RDMA, combined with InfiniBand's other inherent characteristics, enables high performance data clustering, communications and storage traffic to be run over a single InfiniBand fabric.

A rapidly growing use for HPC with InfiniBand is exploratory medicine, where magnetic resonance imaging (MRI) is becoming an increasingly sophisticated and cost-effective diagnostic technique. An MRI scanner operating within an InfiniBand-architected network can generate, digitise and process vast quantities of image data, and multiple high-speed data transfers free up the desktop PC so it is dealing only with the presentation of the processed image. This greatly enhances the speed with which medical professionals can view and interpret the scans.

An equally compelling home for InfiniBand has emerged in the military marketplace, where commercial off-the-shelf (COTS) solutions are becoming markedly preferred as the building blocks of avionics and other control systems. Even routine missions are now information-intensive, involving simultaneous data, audio and video, and demanding bandwidth that pushes existing technologies beyond their capabilities. InfiniBand, with its high bandwidth, low latency and lower cost, has huge potential for use in avionics, combining simple value for money with data integrity, scalability, fault tolerance and security. As with other HPC applications, its strength lies in its ability to intelligently offload complex overhead and housekeeping tasks from the compute processing system.

Future growth and applications

A combination of market changes, new demands and new possibilities has retrieved InfiniBand from its small between-box niche and reinstated it as a powerful and promising new interconnect fabric, now and for the foreseeable future. Perhaps most encouragingly, it is no longer competing with similar products (such as PCI-Express and RapidIO), but working with them so each complements the other's strengths. In future, for example, expect to see PCI-Express featuring strongly as the local interconnect, which, when combined with InfiniBand's RDMA capabilities, provides all the basic components necessary to design immensely powerful networking products for high performance computing, clustered database and storage applications.

A common theme to many new areas of InfiniBand's success is most clearly visible within the embedded computing marketplace, where the InfiniBand architecture has arguably the most to offer.

Such applications require scalable bandwidth which increases as nodes are added, and must be able to transfer data concurrently - a good description of InfiniBand's key benefits. Among the switched fabric alternatives, InfiniBand is the only one effective as a board-to-board, chassis-to-chassis and system-to-system interconnect alike. SBS Technologies' IB4X-PMC-2 HCA is the first 4X dual-port InfiniBand HCA to be provided on a PCI Mezzanine Card, and with data throughput of 10 Gbps and low CPU utilisation, it is ideal for data acquisition, high-performance computing clustering, storage, video capture and database applications.

High-performance storage also stands to gain from the InfiniBand architecture. Both network attached storage (NAS) and storage area network (SAN) technologies have established footholds, individually and in hybrid form. When bridged to InfiniBand clusters, a single gateway offers the same capability as a dedicated Fibre Channel adapter in each server, as well as a better price/performance ratio - a gap that reaches orders of magnitude when InfiniBand switches are compared to the equivalent Ethernet products.

New forms of server technology, notably blade server computing, are also welcoming InfiniBand. A server blade is a single circuit board populated with components such as processors, memory and network connections that are usually spread across multiple boards; blades are thus more cost-efficient, smaller and consume less power than traditional box-based servers. When InfiniBand is added to the mix, the server blade becomes not only more efficient but also highly available and easy to manage, with a scalable computing and I/O infrastructure that leads directly to lower cost of ownership.

Conclusion - a bright fabric future

InfiniBand's endurance in a competitive marketplace, its consistent performance improvements, its openness and its ability to work with competing technologies all prove that a well-designed technology, however advanced, can be sufficiently influential to mould the direction of other technologies which seek to take advantage of its benefits. Its low cost and high performance, combined with rugged flexibility, make InfiniBand a powerful building block within a large and growing market. Its scalability and reliability mean it will remain influential far into the future, both singly and in combination with other complementary technologies.

For more information contact RedLinX, +27 (0)21 555 3866, [email protected]





