
Understanding a Key Trend in Embedded System Design

Dear Desktop Engineering Reader:

Embedded control and monitoring systems design takes many forms depending upon your industry. But whether you're working in, say, energy, industrial control, transportation, or life sciences, pressure from above to minimize design time (translation: deploy, then you can eat) and ever-increasing requirements have changed how you approach new embedded design projects. And while technologies like multi-core CPUs have helped you achieve designs with faster clock speeds and greater throughput, they've really only bought you some time. Today's Check It Out offering looks at what you can do with the time you have.

National Instruments

The link over there takes you to a web page in the National Instruments Developer Zone. Essentially, this page serves as an introduction to an emerging trend in embedded design: combining multiple processing elements in heterogeneous architectures to develop high-performance embedded systems. In other words, you can build a control and monitoring system from a mix of compute architectures and multiple discrete processing elements and achieve a better, more optimized balance of throughput, latency, flexibility, and cost. As a bonus, according to NI, you can do this quickly and with smaller design teams.

If you're acquainted with what GPUs (graphics processing units) have done for simultaneous CAE and design, what's going on here will sound familiar. Essentially, you combine platforms and components, then use each to execute the I/O and compute jobs it does best. This also means you simultaneously offload jobs from components not optimized for the assignment onto components that are.

The example cited couples a CPU, an FPGA (field-programmable gate array), and some I/O in an off-the-shelf architecture. The key component is the FPGA. These chips are well suited to parallel computations such as signal-processing operations on a large number of parallel data channels. They also implement computations directly in hardware, which means a low-latency path for jobs like triggering and high-speed, closed-loop control. But the real point is that you program them with LabVIEW to do what you want them to do, so you customize your off-the-shelf hardware through software.

FPGAs offer other attributes, such as easy upgradability compared to fixed logic, that the discussion looks at but we'll skip in the short space we have here. The takeaway is that this combination means you do not have to choose between the strengths of your CPU and those of your FPGA. You could, for example, use one FPGA to handle a low-latency parallel task rather than using multiple CPUs to do the same thing. The CPU can then handle the OS, the buses, and the other stuff it does well. The net effect is a higher-performing, more flexible, and more capable embedded system. And it's cost-effective because you're not custom-designing hardware to do the job but using software to make off-the-shelf hardware do your customized job. And that software can be developed by the dude or dudette with the domain expertise using LabVIEW instead of a team of squabbling programmers.
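To make the partitioning idea concrete, here's a minimal sketch in plain Python — not LabVIEW and not NI's API; the function names and the moving-average filter are illustrative assumptions. The per-channel filter stands in for the kind of independent, parallel, low-latency computation you'd push into the FPGA fabric (one hardware pipeline per channel), while the aggregation step stands in for the supervisory bookkeeping the CPU keeps for itself.

```python
def fpga_stage(samples_by_channel, window=4):
    """Per-channel moving-average filter: each channel is independent,
    which is exactly the shape of work an FPGA implements directly in
    hardware as one parallel pipeline per channel. (Hypothetical name.)"""
    filtered = {}
    for ch, samples in samples_by_channel.items():
        out = []
        for i in range(len(samples)):
            lo = max(0, i - window + 1)
            out.append(sum(samples[lo:i + 1]) / (i + 1 - lo))
        filtered[ch] = out
    return filtered

def cpu_stage(filtered):
    """Host-side supervision: aggregate per-channel results for
    logging or display -- the role the CPU handles well."""
    return {ch: max(vals) for ch, vals in filtered.items()}

# Two simulated input channels; the "FPGA" filters them in parallel,
# the "CPU" reduces the results to per-channel peaks.
channels = {0: [0.0, 1.0, 2.0, 3.0], 1: [4.0, 4.0, 4.0, 4.0]}
peaks = cpu_stage(fpga_stage(channels))
```

In a real LabVIEW-based design the filter loop would be deployed to the FPGA target and the aggregation would run on the embedded processor; the sketch only illustrates how the workload splits along those lines.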

The web page buttresses its broader discussion with a very brief and interesting case study of a transportation solutions company. Engineers here built a control and monitoring system capable of simulating signals to imitate a real-world train, measuring test data, and logging information. The system used off-the-shelf hardware and a combination of an FPGA and embedded processors to provide high-speed control, data acquisition, and data analysis in a single system.

National Instruments, of course, provides graphical system tools such as LabVIEW as well as a variety of reconfigurable I/O hardware for engineers and scientists developing control and monitoring systems including its newly announced cRIO-9068, a high-performance software-designed controller. But the page is a soft sell. It’s about delivering innovative high-performance embedded systems without building custom hardware and without mastering multiple design tools. Check it out by hitting the link over there.

Thanks, Pal. – Lockwood

Anthony J. Lockwood
Editor at Large, Desktop Engineering


About Anthony J. Lockwood

Anthony J. Lockwood is Desktop Engineering's Editor-at-Large. Contact him via de-editors@deskeng.com.