In almost every industry, traditional centralised data acquisition systems are being replaced with more distributed networks of measurement devices. This trend has recently combined with the move of computing into the cloud, and together the two will continue to change not only how data is acquired, but also how it is stored, accessed and analysed.
While these trends have been fuelled in part by Moore’s Law and increasing computing power, the bulk of the shift has been driven by trends in the consumer electronics industry that are changing how we think of and interact with data. The benefits of these new architectures are numerous, including reduced capital, installation and maintenance costs; more powerful analytics; and the ability to access data from anywhere.
Historically, data acquisition systems were large and fragile. They needed to be separated from the harsh environments that could be present around a device under test (DUT), or were simply too large to be placed near the device. This drove the centralised architecture that has traditionally been used.
In a centralised architecture, data acquisition equipment is stored in a central rack or control room, where there is ample space and the equipment can be protected from the test environment. Sensor cables are then run, sometimes hundreds of metres, from this central location to sensors and actuators throughout the test fixture.
As applications become more complex, these home-run sensor wiring approaches become more difficult and costly to implement. The cost of running sensor cable can often be the single largest line item for installing new data acquisition systems once labour and capital costs are included.
The alternative approach is to fragment the data acquisition system and distribute it around the application, running a single, inexpensive network cable back to the server or control room for data transfer. Consider a wind turbine: any wire running from the blades into the central housing must first pass through a slip ring so the blades can spin freely. The more wires that run out to the blades, the more complex the slip ring system required, multiplying the points of failure and driving up system cost.
These distributed systems break the data acquisition system apart into smaller subsystems placed around the DUT, often in the test environment and as close to the measurement sensor as possible. They interact with the DUT locally, receiving commands from a central server where the test operator is located and sending data back for logging. Computing can also be distributed close to the DUT for smart data reduction or localised control algorithms, without flooding the network with data or commands, as the node sketch below illustrates.
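To make this concrete, here is a minimal Python sketch of what such a node might do: acquire a block of samples locally, reduce it to a few key values, and send only the summary to the central server. The server address, node name, line-based JSON protocol and the read_sensor() stub are all illustrative assumptions standing in for real DAQ hardware and a real vendor protocol.

import json
import random
import socket
import statistics
import time

SERVER_ADDR = ("192.168.0.10", 5000)  # hypothetical central server address

def read_sensor() -> float:
    """Stand-in for a real ADC read; returns a simulated voltage."""
    return 2.5 + random.gauss(0, 0.05)

def acquire_block(n_samples: int = 1000, rate_hz: float = 1000.0) -> list[float]:
    """Acquire one block of samples at roughly the requested rate."""
    samples = []
    for _ in range(n_samples):
        samples.append(read_sensor())
        time.sleep(1.0 / rate_hz)
    return samples

with socket.create_connection(SERVER_ADDR) as conn:
    while True:
        block = acquire_block()
        # Reduce locally to key values so only a small summary crosses the network.
        summary = {
            "node": "blade-1",  # illustrative node name
            "mean": statistics.mean(block),
            "min": min(block),
            "max": max(block),
            "timestamp": time.time(),
        }
        conn.sendall((json.dumps(summary) + "\n").encode())

Because each block is reduced to a handful of numbers before transmission, the network traffic stays tiny regardless of the local sampling rate.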
Advantages of a distributed architecture
This architecture offers several advantages over a centralised system. By breaking the large centralised system into a modular, distributed one, smaller and cheaper subsystems are created that can be maintained and replaced more easily should one fail. A modular system is also much more flexible, as nodes can simply be added to the network or swapped out if measurement needs change. This ease and low cost of repair mean more uptime and higher reliability for the measurement system.
In contrast, if the requirements of a centralised system change, expensive capital equipment may need to be replaced, reinstalled and rewired.
A distributed architecture also reduces wiring cost by running a single communication cable to the distributed subsystems rather than laying possibly hundreds of sensor wires throughout a test cell. This reduction in cabling can lower costs and, more importantly, increase measurement accuracy because the shorter sensor wires to the distributed systems are less prone to noise, interference and signal loss.
A sensor wire acts as an antenna as it runs to the data acquisition system, picking up electrical interference from fluorescent lights, motors and other seemingly benign sources in the room. This interference can be combated with techniques like shielded cabling and twisted-pair wires, but these only increase the cost of the cabling used.
As an example of the cost of sensor wiring, in aerospace structural test cells like those used at Boeing, Airbus and Embraer, wiring and cabling often accounts for upwards of 25% of the total test cell hardware and software budget. By running standard Ethernet cable instead of sensor wire, these costs can easily be cut in half.
Finally, a distributed system can help offload processing from the central computer. Many distributed data acquisition systems have onboard intelligence that can be used to run analysis or reduce data to key values before uploading it to the central system.
This architecture allows the creation of task-specific nodes, with some of the analytics done on the DAQ device and the user interface (UI) handled by a separate computer. The central computer can therefore be substantially cheaper and more responsive than in a centralised architecture, because much of the processing has been offloaded and it can focus solely on the UI and data storage, as the server-side sketch below suggests.
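Continuing the hypothetical node sketch above, the central computer's side could be as simple as the following: it receives the one-line JSON summaries, appends them to a log file for storage, and prints the key values for the operator. The port number and record format are assumptions carried over from the earlier sketch, not a real product's protocol.

import json
import socketserver

class SummaryHandler(socketserver.StreamRequestHandler):
    def handle(self):
        for line in self.rfile:  # one JSON summary per line from a node
            summary = json.loads(line)
            # "Storage": append the raw record to a simple log file.
            with open("test_log.jsonl", "a") as log:
                log.write(line.decode())
            # "UI": display the key values for the test operator.
            print(f"{summary['node']}: mean={summary['mean']:.3f} V "
                  f"(min={summary['min']:.3f}, max={summary['max']:.3f})")

if __name__ == "__main__":
    with socketserver.TCPServer(("", 5000), SummaryHandler) as srv:
        srv.serve_forever()

Note that nothing here does any signal processing: the analytics stayed on the nodes, leaving the central machine with only logging and display duties.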
Driven by smaller, cheaper and more powerful processing
Until recently, the benefits of implementing a distributed system were outweighed by the high cost of hardware small and powerful enough to embed throughout a test fixture. Over the past decade, however, smaller and cheaper processing power has driven down the price of distributed data acquisition hardware and fuelled adoption of this more efficient and flexible architecture.
As processors and analogue-to-digital converters (ADCs) become smaller, cheaper and more capable, they can more easily be embedded into small subsystems. Data acquisition systems no longer need the large amount of space that only a centralised rack can offer, but can now be placed in packages small enough to distribute around a test. This allows the data acquisition system to exploit the inherent advantages of a distributed architecture and reduced sensor wiring.
Consumer trends adding fuel to the fire
Advancements in processor and ADC power and size have accelerated greatly in the past five years, driven by the explosion of embedded consumer devices. Triggered by the arrival of the smartphone in 2007, embedded processors are now prevalent in most consumer products, from thermostats to refrigerators.
This massive increase in deployment has driven semiconductor manufacturers to further optimise their products for small, embedded systems. Data acquisition vendors that build on common off-the-shelf parts can then leverage the same technological advancement to create more capable and cost-effective distributed products.
This growth of embedded consumer devices is also changing our expectations of how we interact with electronic devices. Home computing tasks that used to be relegated to a single device, the family computer, are now distributed across different task-optimised products. Internet browsing is done on a tablet, pictures are stored on media servers or in the cloud, and videos and movies are watched on Internet-capable TVs. By distributing the computing power, we have created task-specific products that are more efficient from both a productivity and a cost perspective.
This trend is not staying at home, either. As Adam Richardson notes in Innovation X, business customers’ expectations are largely driven by the sum of a person’s experiences, including those in the consumer world. As engineers, scientists and technicians become accustomed to this more distributed style of computing, they will increasingly expect it in the lab, and data acquisition companies are now, more than ever, capable of delivering on that expectation.
To the cloud
Recently, these trends have converged with a development in both business and consumer technology that is taking distributed data acquisition systems even further: cloud computing.
By placing saved data in the cloud, whether in an internal, private cloud or in a public one like Microsoft’s Azure, three advantages are gained: near-infinite processing power, near-infinite storage, and access to the data from anywhere.
One of the key features of cloud computing is that it abstracts away individual processors: the user sees a computer with a seemingly unlimited number of cores, capable of running analyses that could never be done on a single machine.
This allows further optimisation of a distributed system by placing processing-intensive tasks in the cloud that could run neither on a distributed DAQ node nor on the central computer. Analysis that formerly locked up a system for hours or days can now be offloaded, leaving the computer free to continue collecting data or to perform less demanding calculations.
The main advantage of cloud computing, however, is the ability to store practically limitless amounts of data and to access that data from anywhere. Rather than being limited to the computer the data was acquired on, one can envision a system where the QR code on a product subassembly is scanned with a smartphone, which instantly displays its entire test history. Cloud computing has the potential to change the entire workflow of data acquisition into a much more efficient model.
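As an illustration of that QR-code workflow, the sketch below uploads test records to, and retrieves a device’s full history from, a hypothetical REST service backed by cloud storage. The https://example.com/api address, the endpoint layout, the record fields and the serial number are all invented for this example; a real deployment would use the API of the chosen cloud provider.

import requests

API = "https://example.com/api"  # hypothetical cloud service endpoint

def upload_result(device_id: str, record: dict) -> None:
    """Push one test record to cloud storage, keyed by device serial."""
    resp = requests.post(f"{API}/devices/{device_id}/tests", json=record)
    resp.raise_for_status()

def test_history(device_id: str) -> list:
    """Fetch the full test history; device_id is what a QR scan would yield."""
    resp = requests.get(f"{API}/devices/{device_id}/tests")
    resp.raise_for_status()
    return resp.json()

# A phone app would decode the QR code into a serial number, then call:
for test in test_history("SN-000123"):
    print(test["timestamp"], test["result"])

Because the records live in the cloud rather than on the acquisition computer, the same query works from a phone on the factory floor or a desk on another continent.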
For more information contact National Instruments, 0800 203 199, [email protected], http://southafrica.ni.com