What Does Application Security Mean for Embedded Devices?

December 4, 2018 Bill Graham


The term application security is a popular one in the software community. When people refer to application security, they typically mean the security of enterprise applications: the software that enables banks, e-commerce sites, and businesses in general to operate. The term is less common in the embedded software industry, which produces the software that controls power plants, factories, airplanes, cars, and thermostats, and in general facilitates our day-to-day lives. This leads people to assume that application security, and the principles, techniques, and tools it relies on, does not apply. That could not be farther from the truth. Benchmarks such as the OWASP Top 10, tools such as static analysis, and processes such as DevSecOps are equally applicable to embedded systems and applications, with a bit of translation where needed.

So, what is the difference between an application and an embedded device? It really depends on your point of view.

"Application" Means Different Things

So, what is an application in an enterprise system? It is typically a piece of functionality that implements business logic to deliver value to an end user, where that end user could be another system or a person. The application sits on top of layers of abstraction, such as a web server, Node.js, or another platform that uses other applications such as databases, or connects to other applications.

An embedded system can vary from a small system, such as an engine control unit in a car, to a large one, such as a telecom switch. Developers building small systems that perform a dedicated function likely don’t view what they’re doing as “application development.” Similarly, a developer working on high-level applications in a telecom switch might not view what they do as embedded development. Let’s consider some of the different points of view on application development with respect to embedded development:

  • The device is the application: For dedicated systems that literally do the same thing day in and day out, the device’s reason for being is its functionality. The device provides the application and is modified only for security and quality updates.
  • We build platforms, not applications: In large-scale embedded systems, such as a telecommunications switch, separate teams often work on different layers of the architecture. The low-level “platform” teams deal with hardware interfaces, real-time operating systems, device drivers, and so on. They often see themselves in a different world from the teams building the software that uses the platform.
  • Applications are for desktops and servers: Some embedded developers may not consider any part of what they do to be application development, viewing it instead as solely the domain of desktop computers and back-end servers. Whatever their point of view, these developers must still consider security as important to their jobs as it is for any software developer.

What is an Application?

Wikipedia defines application software as “computer software designed to perform a group of coordinated functions, tasks, or activities for the benefit of the user.” This seems straightforward, although it’s important to realize that the “user” may not be a human but another system. It’s easy to see that almost every embedded device provides some kind of functionality that benefits an end user, human or otherwise.

Why is it important to bring up application security with embedded developers? First and foremost, to highlight the importance of security for any device that is connected to other devices or accepts some kind of user or system input. Developers may assume, for example, that malicious use of their device is unlikely because it’s destined for use in a closed system, tucked away, out of sight. A clear example of this is the Stuxnet worm, which was designed to attack industrial control systems thought to be “air gapped” from the outside world. Although this attack was quite sophisticated, it’s unlikely the designers of the industrial control system considered the scenario of a malicious attack at all, let alone one launched from within a private network.

So, What Parts of Application Security Might Embedded Developers be Missing?

By not considering application security or taking security seriously as a top-level requirement and risk in development, embedded developers continue to expose themselves and their companies to risk and product liability. This is understandable, because many of these devices are mass produced on tight budgets. However, the cost of not securing a device eventually comes back to bite the manufacturer. Consider some of the things embedded developers might be missing by ignoring the principles, techniques, and tools used in other domains, such as those from OWASP and CERT/CWE:

  • Not designing security into their device: Securing any software after it’s been built is far more difficult and expensive than doing so early, at inception and design. Beyond that, embedded developers are unlikely to consider all the possible attack surfaces on their device, or even the possibility of malicious use (although recent media coverage may be changing this view). I’ve covered the “secure by design” topic in the past; see this blog series starting here.
  • Not testing for security: If security isn’t a high priority, then it’s also likely the device has not been through security testing. This means more than external penetration testing: even down at the unit and code level, it’s unlikely the code has been reviewed for vulnerabilities. Static analysis tools, for example, are useful for discovering latent bugs and security issues in code, even after development.
  • Not using processes and tools to improve security: A critical part of designing in security is the use of proven processes, tools, techniques, and coding standards. Incorporating best practices into the development cycle helps reduce the overall cost of security and spreads the effort over the entire project.
  • Not fixing security issues: It seems obvious to many developers that if a security vulnerability is found, they’ll just fix it and patch the product. Unfortunately, this is far from what’s practiced in the industry. In fact, over 25% of critical security vulnerabilities remain unpatched 290 days after being discovered! Embedded devices can be particularly difficult to patch, and even then, it’s difficult to notify users and disseminate patches effectively.

Embedded developers are advised to heed the teachings and experience of the application and enterprise world. In fact, I’ll look at the OWASP Top 10 vulnerability list and how it applies to embedded development in a future post.

Summary

Despite protests to the contrary, in most cases embedded developers are building software applications. Although discussions of application security don’t usually include embedded devices or their unique constraints, many of the principles still apply. Security is often an afterthought, and retrofitting it post-development is an expensive proposition. Either customers or attackers will find the security vulnerabilities if developers don’t. Considering the principles and experience of application security is just as important for embedded developers.
