Tag Archives: security

Pondering security in an Internet of things era

It hasn’t taken long for the question of security to rise to the top of the list of concerns about the Internet of Things. If you are going to open up remote control interfaces for the things that assist our lives, you have to assume people will be motivated to abuse them. As cities get smarter, everything from parking meters to traffic lights is being instrumented with the ability to control it remotely. Manufacturing floors and power transmission equipment are likewise being instrumented. The opportunities for theft or sabotage are hard to deny. What would happen, for example, if a denial of service attack were launched against a city’s traffic controls or energy supply?

Privacy is a different, but parallel concern. When you consider that a personal medical record is worth more money on the black market than a person’s credit card information, you begin to realize the threat. The amount of personal insight that could be gleaned if everything you did could be monitored would be frightening.

The problem is that the Internet of Things greatly expands the attack surface that must be secured. Organizations often have a hard enough time simply preventing attacks on traditional infrastructure. Add in potentially thousands of remote points of attack, many of which may not be feasible to physically protect, and now you have a much more complex security equation.

The truth is that it won’t be possible to keep the Internet of Things completely secure, so we have to design systems that assume that anything can be compromised. There must be a zero trust model at all points of the system. We’ve learned from protecting the edges of our enterprises that the firewall approach of simply controlling the port of entry is insufficient. And we need to be able to quickly recognize when a breach has occurred and stop it before it can cause more damage.

There are of course multiple elements to securing the Internet of things, but here are four to consider:

1) “Things” physical device security – in most scenarios the connected devices can be the weakest link in the security chain. Even a simple sensor that you may not instinctively worry about can turn into an attack point. Hackers can use these attack points to deduce private information (like monitoring a smart energy meter to infer that a home’s occupants are away), or even to infiltrate entire networks. Physical device security starts with making devices tamper-resistant. For example, devices can be designed to become disabled (and their data and keys wiped) when their cases are opened. Software threats can be minimized with secure booting techniques that can sense when software on a device has been altered. Network threats can be contained by employing strong key management between devices and their connection points.

Since the number of connected things will be extraordinarily high, onboarding and bootstrapping security into each one can be daunting. Many hardware manufacturers are building “call home” technology into their products to facilitate this, establishing a secure handshake and key exchange. Some manufacturers are even using unique hardware-based signatures to facilitate secure key generation and reduce spoofing risk.
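A call-home handshake of this kind can be sketched as a hardware-derived key plus an HMAC challenge/response. This is a minimal illustration, not a production protocol; the device ID, key-derivation scheme, and function names are all hypothetical:

```python
import hashlib
import hmac
import secrets

def derive_device_key(hardware_id: bytes, master_secret: bytes) -> bytes:
    # Derive a per-device key from a hardware-unique identifier,
    # so a leaked device key compromises only that one device.
    return hmac.new(master_secret, hardware_id, hashlib.sha256).digest()

def respond_to_challenge(device_key: bytes, nonce: bytes) -> bytes:
    # The device proves possession of its key without ever transmitting it.
    return hmac.new(device_key, nonce, hashlib.sha256).digest()

def verify_device(hardware_id: bytes, master_secret: bytes,
                  nonce: bytes, response: bytes) -> bool:
    # The server recomputes the expected response and compares in constant time.
    expected = respond_to_challenge(
        derive_device_key(hardware_id, master_secret), nonce)
    return hmac.compare_digest(expected, response)

# Example "call home" exchange
master = secrets.token_bytes(32)
hw_id = b"SN-000123"                      # hypothetical hardware serial
nonce = secrets.token_bytes(16)           # server-issued challenge
resp = respond_to_challenge(derive_device_key(hw_id, master), nonce)
assert verify_device(hw_id, master, nonce, resp)
```

A real deployment would add replay protection and rotate keys, but the core idea is the same: the key never crosses the wire, and each device's key is unique.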

2) Data security – data has both security and privacy concerns, so it deserves its own special focus. For many connected things, local on-device caching is required. Data should always be encrypted, preferably on the device prior to transport, and not decrypted until it reaches its destination. Transport layer encryption is common, but if data is cached unencrypted on either side of the transport, risks remain. It is also usually a good idea to insert security policies that inspect data to ensure its structure and content are what is expected. This discourages many potential threats, including injection and overflow attacks.
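The structure-and-content inspection described above can be sketched as a gateway check that validates each payload against an expected shape before passing it on. The schema, field names, and limits here are hypothetical, using only the Python standard library:

```python
import json

# Expected shape for a hypothetical smart-meter reading
SCHEMA = {"device_id": str, "timestamp": int, "kwh": float}
MAX_PAYLOAD_BYTES = 4096  # reject oversized messages outright

def validate_reading(raw: bytes) -> dict:
    """Accept a payload only if its size, fields, types, and ranges match."""
    if len(raw) > MAX_PAYLOAD_BYTES:
        raise ValueError("payload too large")
    data = json.loads(raw)
    if set(data) != set(SCHEMA):
        raise ValueError("unexpected fields")        # blocks smuggled extras
    for field, ftype in SCHEMA.items():
        if not isinstance(data[field], ftype):
            raise ValueError(f"bad type for {field}")
    if not 0 <= data["kwh"] < 1000:
        raise ValueError("value out of range")        # sanity-check content
    return data
```

Rejecting unknown fields and out-of-range values up front is what blunts many injection and overflow attempts before the data ever reaches a back-end system.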

3) Network security – beyond securing the transmission of data, the Internet of things needs to be sensitive to the fact that it is exposing data and control interfaces over a network. These interfaces need to be protected by bilateral authentication and detailed authorization policies that constrain what can be done at each side of the connection. Since individual devices cannot always be physically accessed for management, remote management is a must, enabling new software to be pushed to devices, but this also opens up connections that must be secured. In addition, policies need to be defined at the data layer to ensure that injection attacks are foiled. Virus and attack signature recognition is equally important. Denial of service attacks also need to be defended against, which can be facilitated by monitoring for unusual network activity and providing adequate buffering and balancing between the network and back-end systems.
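The buffering side of denial-of-service defense often starts with per-device rate limiting. A minimal token-bucket sketch (the class name and parameters are illustrative, not from any particular product):

```python
import time

class TokenBucket:
    """Per-device rate limiter: absorbs short bursts, caps sustained rates."""

    def __init__(self, rate: float, capacity: int):
        self.rate = rate            # tokens replenished per second
        self.capacity = capacity    # maximum burst size
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        # Refill tokens in proportion to elapsed time, up to capacity.
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False                # request should be dropped or queued
```

A gateway would keep one bucket per device or per client identity; a flood from one compromised sensor then degrades only that sensor's traffic, not the back end.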

4) Detecting and isolating breaches – despite the best efforts of any security infrastructure, it is impossible to completely eliminate breaches. This is where most security implementations fail. The key is to constantly monitor the environment down to the physical devices to be able to identify breaches when they occur. This requires the ability to recognize what a breach looks like. For the Internet of things, attacks can come in many flavors, including spoofing, hijacking, injection, viral, sniffing, and denial of service. Adequate real-time monitoring for these types of attacks is critical to a good security practice.

Once a breach or attack is detected, rapid isolation is the next most important step. Ideally, breached devices can be taken out of commission, and remotely wiped. Breached servers can be cut off from sensitive back end systems and shut down. The key is to be able to detect problems as quickly as possible and then immediately quarantine them.
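The detect-and-quarantine flow above can be sketched as a simple device registry that cuts a device off after repeated anomalies. The state names and threshold here are hypothetical:

```python
from enum import Enum

class DeviceState(Enum):
    ACTIVE = "active"
    QUARANTINED = "quarantined"
    WIPED = "wiped"

class DeviceRegistry:
    """Tracks device health; quarantines on repeated anomalies."""

    def __init__(self, anomaly_threshold: int = 3):
        self.threshold = anomaly_threshold
        self.states = {}     # device_id -> DeviceState
        self.anomalies = {}  # device_id -> anomaly count

    def register(self, device_id: str) -> None:
        self.states[device_id] = DeviceState.ACTIVE
        self.anomalies[device_id] = 0

    def report_anomaly(self, device_id: str) -> DeviceState:
        self.anomalies[device_id] += 1
        if self.anomalies[device_id] >= self.threshold:
            # Cut the device off from the network pending investigation.
            self.states[device_id] = DeviceState.QUARANTINED
        return self.states[device_id]

    def remote_wipe(self, device_id: str) -> None:
        # In a real system this would push a signed wipe command to the device.
        self.states[device_id] = DeviceState.WIPED
```

The hard part in practice is the anomaly detection feeding `report_anomaly`; the quarantine mechanics themselves, as shown, are straightforward.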

Outside of these four security considerations, let me add two more that are specifically related to privacy. Since so much of the Internet of things is built around consumer devices, the privacy risks are high. Consumers are increasingly pushing back against the surveillance economy inherent in many social networking tools, and the Internet of things threatens to take that to the next level.

Opt in – Most consumers have no idea what information is being collected about them, even by the social tools they use every day. But when the devices you use become connected, the opportunities for abuse get even worse. Now there are many great reasons for your car and appliances and personal health monitors to be connected, but unless you know that your data is being collected, where the data is going, and how it is being used, you are effectively being secretly monitored. The manufacturers of these connected things need to provide consumers with a choice. There can be benefits to being monitored, like discounted costs or advanced services, but consumers must be given the opportunity to opt in for those benefits, and understand that they are giving up some personal liberties in the process.

Data anonymization – when data is collected, much of the time the goal is not to get specific personal information about an individual user, but rather to understand trends and anomalies that can help improve and optimize downstream experiences. Given that, organizations that employ the Internet of things should strive to remove any personally identifying information as they conduct their data analysis. This practice will reduce the number of privacy exposures, while still providing many of the benefits of the data.
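One common (if partial) approach is to replace direct identifiers with salted one-way hashes, so trend analysis still works (the same user maps to the same token) while the identity itself is hidden. Strictly speaking this is pseudonymization rather than full anonymization, since re-identification may still be possible when combined with other data. A minimal sketch with hypothetical field names:

```python
import hashlib

def anonymize(record: dict, pii_fields: set, salt: bytes) -> dict:
    """Replace identifying fields with salted one-way hash tokens."""
    out = {}
    for key, value in record.items():
        if key in pii_fields:
            # Same input + same salt -> same token, so trends remain visible.
            digest = hashlib.sha256(salt + str(value).encode()).hexdigest()
            out[key] = digest[:16]
        else:
            out[key] = value
    return out
```

Keeping the salt secret (and rotating it per analysis run, if linkage across runs isn't needed) makes dictionary attacks against the tokens much harder.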

The Internet of things requires a different approach to security and privacy. Already the headlines are rolling in about the issues, so it’s time to get serious about getting ahead of the problem.

Tagged ,

Malware on Mobile Grew 163% in 2012

Darrell Etherington of TechCrunch posted about a new threat and fraud study by NQ Mobile that reinforces the growing problem of security issues surrounding mobile. The report found that more than 32.8 million devices were infected in 2012. Over 25% of the infected devices are in China. It just isn’t realistic to assume that the mobile devices accessing your APIs are not infected…

Tagged ,

Protecting Against JSON-Bourne Attacks

Mobile has become the new #1 target for hackers and cyberattacks. As consumers and businesses become more comfortable conducting business over mobile devices, mobile becomes a natural target for the baddies who want to steal personal information, or just disrupt business. And if you believe what the experts are saying, you need to be prepared that your mobile phone will eventually be hacked… With the number of incidents of malicious code (particularly on Android) increasing by the day, it is vital that your organization is prepared.

That’s JSON-bourne, not Jason Bourne…

One of the main targets is not actually what is sitting on the phone, but instead the services that the app is accessing on the back end. These services present APIs that mobile apps invoke to get and put information into back end systems. In the mobile world, most of these APIs use a simple and concise data format called JSON to transmit this data and these requests. While most organizations protect these APIs using traditional firewall technologies, many are not doing enough to protect themselves from malicious content hidden in the JSON. According to IDC, “signature-based tools (antivirus, firewalls, and intrusion prevention) are only effective against 30–50% of current security threats.”

A similar issue arose at the height of SOA adoption, when protection against XML-bourne attacks became standard practice, but with the rise of mobile and lighter-weight RESTful services, organizations need to shift to make sure they are protecting themselves against new threats.

The problem is that it is fairly easy to inject malicious content, buried in JSON data, into a seemingly innocuous REST call. Unless the malicious content matches one of the signatures that your firewall is watching for, the content will get through to the server, where it can execute as server-side JavaScript (for example, when payloads are parsed with eval() rather than a safe JSON parser). Since this code is generally executed in a less-protected area, it often has access to sensitive back-end systems, where it can do more damage or compromise private information.
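The pitfall is easiest to see by comparing a strict JSON parser with a naive eval-based one. The example below uses Python as an analogue for the server-side JavaScript case, with a harmless payload standing in for malicious code:

```python
import json

# Code disguised as "data" -- harmless here, but it could be anything.
malicious = '__import__("os").getpid()'

# A strict JSON parser rejects it outright: this is not valid JSON.
try:
    json.loads(malicious)
    parsed = True
except json.JSONDecodeError:
    parsed = False
assert parsed is False

# A naive parser built on eval() executes it instead of rejecting it.
result = eval(malicious)          # the code actually runs
assert isinstance(result, int)    # it returned this process's PID
```

The same split exists in JavaScript between `JSON.parse()` and `eval()`; a gateway that inspects and strictly parses payloads closes this hole before the application code ever sees the data.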

Luckily, there is a way to protect against these threats without relying solely on good programming practices. Some of the same security gateways that organizations use to protect Web services can be easily extended to protect JSON/REST services. The best of these (like WebSphere DataPower) can be delivered as secure hardware appliances that prevent unauthorized tampering and provide FIPS 140-2 Level 3 certified protection. These gateways work by inspecting the data payloads and finding and filtering out suspect JSON data (among other things), providing a much deeper level of protection than traditional firewalls alone.

When you take a look at the OWASP top 10 threats, many remain relevant in JSON-centric applications. For example, Cross-Site Scripting (XSS) and Cross-Site Request Forgery (CSRF) are still concerns. In addition, hackers can inject very large JSON documents that can cause massive slowdowns in the systems that process those messages. However, the biggest threat is script injection – one that is a bit more specific to how JSON is processed in JavaScript, and one that enables direct execution of functions on infected servers.

With all of the focus and spending on mobile security, organizations need to be considering this threat as much as they are the threats to what is resident on the phone itself. I don’t think this has sunk in for many organizations yet. Is your organization ready?

Tagged , , ,

The ancient art of API Management

I had a good discussion with a company today who has been talking to one of our competitors about API Management. A member of their team asked me what advantages IBM’s offering has over competitors, particularly since the API Management offering was only just announced a few weeks ago.

I think it is a great question, so I want to address it here for everyone’s benefit. I think it really comes down to four key points:

  1. In actuality, IBM has been helping organizations publish services and APIs securely using DataPower for years. In fact, organizations like Pitney Bowes and Royal Caribbean just spoke at the Impact conference in Las Vegas about the success they are having in this area. What is new within the portfolio is the developer portal, API assembly, and business API Insight. The harder parts of API management (security, policy management, traffic management, etc.) have been available in DataPower for a long time.
  2. That said, there are some strong technological advantages in the IBM offering. First of all, the DataPower appliance at the core of the offering is by far the market leading security gateway. DataPower has roughly ten times the customer base of the nearest competitor, and is growing faster. The most security conscious organizations in the world use DataPower to protect the services and APIs that they publish externally. Within the API Management solution, IBM also has a unique ability to easily assemble and publish new APIs through a simple configuration interface. This allows organizations to take internal resources and publish them as secure APIs in a few clicks. Also, the solution employs big data analytics to provide a higher level of insight into API consumption. And perhaps most importantly, IBM brings a level of scale to API Management that none of the other vendors can match. By tapping into IBM developerWorks, IBM can offer access to millions of developers around the world.
  3. Beyond this, IBM’s vision in this space is much broader. Publishing business APIs requires the same level of infrastructure and rigor as other applications, and shares a common technology base with initiatives like mobile computing. IBM’s portfolio is particularly well-suited to the requirements that this creates. Technologies where IBM already holds a market leadership position – service registry for lifecycle management, in-memory caching, mobile application support, and even network traffic shaping – all fit into this vision in the long term, and can quickly provide capabilities well beyond what smaller vendors are able to offer with limited development teams and budgets.
  4. IBM’s development team is several times larger than other API Management competitors, and the network of field experts within IBM and its partners is also many times larger. IBM’s reach around the globe is far beyond what a small startup can offer. And layered on top of that IBM has extensive expertise across industries and technology platforms that exceeds the capacity of smaller companies. If your organization views API Management as a critical strategy, as IBM does, then risk mitigation and scale should be top concerns.

Now none of this is to say that the other API Management vendors lack good technology. I actually really like several of the players in this space, and in fact I have some good friends that work in a couple of them. I just think that API Management is really at an early phase of its lifecycle, so choosing a vendor that understands the challenges and will also be there for the long haul is extremely important.

Tagged , ,

API Management: On Premise or Off Premise?

I have been having more and more customer conversations recently about API Management. Interestingly there are a range of discussions, from customers wanting to publish their own API & Mobile “Stores,” to customers simply wanting to open up key service interfaces over the Internet.

One thing that I continuously get asked is whether API Management functions should be housed on premise or off premise. In some ways, the decision of where to manage APIs is no different than any other on premise vs. off premise discussion. However, when security and system protection are paramount concerns, there is no question that those functions are best hosted on site, typically in a service gateway appliance within your DMZ. At the same time, there are distinct advantages to placing things like Developer Awareness and Support in the Cloud.

Ideally, you should look at each of the main functions of API Management you are deploying and determine where each one is optimal to run. Good API Management products will give you the flexibility to run things where they make the most sense.

Tagged , ,

The Mobile Economy

Mobile is officially hot this year. Seemingly every organization I talk to is developing some sort of mobile application to reach customers, partners, or employees. It seems that tablets have become a mainstream business tool. In fact, a survey found that most small and medium sized businesses plan to buy tens of thousands of dollars worth of tablets in 2012. This is an incredibly fast evolution, since even a year ago most CIOs wouldn’t allow tablets to even connect to their networks.

That bodes well for Apple, but it also opens up a whole new frontier for software providers. Here are some of the things organizations need:

  • Mobile device management – lots of companies have already invested in these tools that allow them to manage the devices deployed to their employees. This is very similar to what was used to manage laptops and other hardware, along with the software that resides on them.
  • Mobile application development – there is an ongoing debate here between native apps and HTML 5, but regardless of which side of the fence you choose, you need multi-platform development tools that take into account things like screen size and platform design (at a minimum). On the server side, development tools need to account for device profile and connection constraints, taking better advantage of techniques like caching and server-side data management.
  • Mobile messaging – while most mobile applications rely on HTTP today, it has a lot of downsides (verbose, slow, processor/battery intensive, unreliable, pull-only). HTML 5 improves things slightly with WebSockets, but in large-scale deployments even the overhead of the initial HTTP handshake can be prohibitive. And reliability, connection management, security, and recovery all need to be layered around it. Direct messaging technology for mobile apps is a wide-open opportunity.
  • Mobile security – with all these new apps connecting into organizations from all over the place, and often doing very sensitive things, security becomes a bigger problem than ever. Authenticating, authorizing, and maintaining secure channels, while also preventing DoS and spoofing, is another emerging requirement in mobile.
  • Mobile business tools – think about all the opportunities for the vendors who offer business applications and productivity tools to extend those capabilities out to mobile devices.

So all of this adds up to an incredible opportunity for software companies, and an even greater opportunity for the organizations they sell to. In a lot of ways, this is the Web all over again. Organizations need a mobile presence to survive, and the new channel opens up opportunities for new entrants. I’m personally optimistic that this will drive a whole new economic wave, like the Web did before it. Fasten your seat belts.

Tagged , ,

Protect your information: whether you like it or not!

I always find it interesting that organizations seem to take such a reactionary approach to data security. It seems that most companies fail to invest in deep data security until they’ve experienced enough serious breaches to shake them into doing something. I haven’t been able to figure out if it is an awareness issue or just denial. Not surprisingly, the technology and process investment needed to truly secure data is much lower than the cost of dealing with a breach.

The truth is that data security is a serious problem for everyone. A 2009 Ponemon Institute study found that 82% of organizations had experienced a data breach, and 94% had experienced data attacks in the past six months. I find this to be a startling number… Similar studies have found that the cost of a data breach is now over $200 per record, and since most of these breaches involve anywhere from 5,000 to 100,000 records, the impact can be extremely high. And yet, most companies rely on standard firewall defenses and database authorization as their sole means of protection.

Another interesting observation is that when people finally do start to invest in deeper data security, they seem to snap into a better awareness and invest quickly to do a better job of protecting a much broader set of data, even though their initiatives may just start with a smaller subset. I think what happens is that in the process of focusing on data security, they realize just how exposed they really are, and they also realize that there is something they can do about it that really isn’t that difficult.

For example, with our InfoSphere Guardium technology, we tend to see companies invest much more heavily in the technology after their initial implementation success – often as much as 5-10x within six months of their first purchase as they expand the scope of their security controls. I think part of this is due to how quickly they are able to roll the technology out. In one case, a European telco rolled out InfoSphere Guardium to 12 data centers within two weeks earlier this year. It shows that once people begin to dig into their actual exposure, and see how easy it is to fix, they suddenly become more proactive.

And being reactive might not be a choice for much longer. The U.S. Commerce Department this week released a report that calls for a new office to be created focusing on corporate information privacy policy. The New York Times reported on it this morning. If things continue down this path, the U.S. will be following in the footsteps of many of the Central European countries (and more recently China) who have enacted similar legislation to force companies to do more to protect their data.

The question is – are you prepared?

Tagged ,