Category Archives: API

Meet Your Makers


We are in the midst of a new era of innovation, and an entire generation of makers is emerging. These makers are enabled by direct access to a range of capabilities and building blocks that were previously only available to multi-million dollar corporations. They have unprecedented control over both the digital and physical world, access to unlimited computing capacity, and an entire Internet of data to exploit. These makers are reshaping not only the technology landscape, but also the practices and opportunities of traditional businesses. If you haven’t done so already, it is time to meet your makers.

Makers can instantly download free developer tools and advanced runtime environments to build new applications. They can spin up Cloud computing infrastructure in minutes to run these applications, accessing tens or hundreds of thousands of dollars’ worth of computing infrastructure without any upfront costs. They can choose from thousands of open APIs to add key capabilities to their applications, incorporating the best data and the best functionality available in the market without laying out a penny of capital expense. Perhaps most amazingly, makers don’t need to be particularly sophisticated to take advantage of all of this – this is a mass movement, not an exclusive one.

The maker generation has been empowered by the removal of three key barriers that have traditionally kept this type of innovation in the hands of large corporations:

  1. Economics
  2. Closed Systems
  3. Technological Complexity

Economic Barriers
The removal of economic barriers through the availability of Cloud computing has been a huge factor in the rise of the maker. Using Cloud services, developers have access to unlimited processing power, storage, and network infrastructure. They can also easily deploy applications across geographic boundaries, lowering the barriers to entering new markets. Pay-as-you-go models are standard, and elasticity is built in, allowing makers to experiment at very low cost and then scale easily to meet bursting demand when ideas catch on.

But the lowering of economic barriers has not been limited to Cloud. Universal mobile and Wi-Fi connectivity, with commoditizing cost structures, has empowered anything, anywhere to be connectable. As they dream up their designs, makers can assume connectivity with a relatively high degree of reliability.

And perhaps the biggest and most current disruption is in the economics of microelectronics. Computers that would have powered businesses thirty years ago can now be shrunk down to postage-stamp size. Battery technologies have evolved to remarkable lifespans, and the energy to charge batteries can be harvested from a variety of sources, including body heat and movement. And yet, with all this advancement, makers can buy an LTE-capable microprocessor on the open market for under $10.

Closed System Barriers
While many of the early computing companies built their businesses on closed systems, computer systems have gradually evolved toward openness, inspired by Internet technologies like TCP/IP and HTTP, and communications technologies like Wi-Fi and GSM. Open programming frameworks like Java, and data formats like XML and JSON, have lowered the barriers to interoperability, enabling makers to build new systems capable of interacting with the old. Open lightweight protocols like Bluetooth LE and MQTT provide easy ways to bridge between the digital and physical worlds.

The most recent wave of technology innovation over the past ten years has produced advancements in open software technology like Hadoop, columnar databases, and document stores, all of which provide the tools for makers to manage and analyze huge volumes of data. And even commercial software companies now routinely offer their products through free download for development use, providing makers with limitless options without having to settle for second-class capabilities.

Technological Complexity Barriers
In my view, the biggest barrier to fall has been the one that has kept information technology in the control of a relatively small population of elite experts. The consumerization of technology, and the resulting simplification of its design, has created a huge accelerator for innovation, and vastly expanded the population of potential makers. Even 10 years ago, programming was mostly limited to technological whiz kids with advanced EE degrees or natural propensities toward mathematics and science. The barriers on the hardware side were even steeper, often requiring deep understanding of hardware architectures and embedded systems.

Today, technology can be used and controlled with a much more basic set of skills. In the Cloud, Platform as a Service technologies simplify traditionally complex tasks like configuring high availability and synchronizing data across data centers. JavaScript has emerged as a low-barrier programming language that simplifies the transition from client to server to database, while naturally extending to mobile devices. Even hardware has joined this wave, with technologies like the $25 Raspberry Pi offering affordable and extensible hardware foundations for makers to build upon. And with 3D printers, even physical objects and prototypes can be created at a fraction of the cost and complexity of the past.

Perhaps most importantly, the drive toward simple Web APIs has inspired a whole new wave of Internet accessible capabilities with easy HTTP-based interfaces that can be learned in minutes. The result of this is a plethora of tools at the maker’s fingertips. Makers can combine data and functions from thousands of developers across thousands of companies, wiring together new applications in hours to achieve what would have taken weeks or months only a decade ago.
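To give a feel for how low that barrier has become, here is a minimal sketch, in Python, of wiring two Web APIs together; the endpoint URLs and JSON field names are hypothetical placeholders, not real services:

    # A minimal sketch of combining two Web APIs. The endpoint URLs and
    # JSON field names below are hypothetical placeholders, not real services.
    import requests

    def city_weather_report(city):
        # Look up coordinates from a (hypothetical) geocoding API...
        geo = requests.get("https://api.example-geo.com/v1/locate",
                           params={"q": city}, timeout=10).json()

        # ...then feed them to a (hypothetical) weather API.
        weather = requests.get("https://api.example-weather.com/v1/current",
                               params={"lat": geo["lat"], "lon": geo["lon"]},
                               timeout=10).json()
        return f"{city}: {weather['summary']}, {weather['temp_c']} C"

    print(city_weather_report("Austin"))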

This is important because these makers are driving much of the innovation happening in the technology marketplace today. They are changing business models, cross-pollinating capabilities and data into new markets, and opening up new channels. They are a potential innovation engine for your own data and capabilities. These makers think in new ways, find new uses for existing assets, and find ways to monetize things that were never thought of as valuable. Your competitors are likely dipping their toes into this innovation pool already, rather than relying only on their traditional IT teams to discover and drive innovation.

So where are these makers? With these barriers removed, they are emerging everywhere. Many of them likely exist in your own organization. They are out there thinking of an idea, perhaps searching for data, expertise, or capabilities that your organization could offer them. These makers are the people who will disrupt your market or lead your industry’s next great opportunity. If they aren’t empowered by your point of view, they will find other means to achieve their goals, many of which may directly compete with your own.

I suggest you make an effort to reach out and meet your makers and empower them before the opportunity passes you by.


A recipe for the Internet of Things

Seemingly every day a new story pops up about the Internet of Things, as new devices and wearables are launched into the market, and large enterprises contemplate the possibilities of a connected world. I’ve spent quite a bit of time discussing the requirements for taking advantage of these capabilities with organizations ranging from automobile manufacturers, to consumer electronics manufacturers, to industrial manufacturers, to city governments. What I’ve seen is a recurring pattern that acts as a guide to what’s needed to capitalize on the Internet of Things, so I thought I would share some of those thoughts.

Registration and Device Management – The first thing needed to support the Internet of Things is a way to easily register a device onto a network, whether that is a simple one-to-one connection between a device and a mobile phone, a home network router, or a cloud service. Self-registration is often ideal for personal devices, but curated registration through APIs or a UI is often better for more security-conscious applications. The registration process should capture the API of the device (both data and control) and define the policies and data structures that will be used to talk to the device, if those things are not already known in advance. Once registered, the firmware on the device sometimes needs to be remotely updatable, or even remotely wipeable in cases with higher security risk.
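As a rough sketch of what self-registration might look like over a simple HTTP interface (the endpoint, payload fields, and token below are hypothetical, not any particular platform’s API):

    # Hypothetical device self-registration call. The URL, fields, and token
    # are placeholders; a real platform would define its own registration API.
    import requests

    registration = {
        "device_id": "pump-0042",
        "model": "coolant-pump-v2",
        "data_schema": {                            # what the device will publish
            "temperature_c": "float",
            "rpm": "int",
        },
        "control_api": ["set_speed", "shutdown"],   # what it accepts remotely
        "policies": {"report_interval_s": 30, "remote_wipe": True},
    }

    resp = requests.post("https://iot.example.com/api/devices",
                         json=registration,
                         headers={"Authorization": "Bearer <registration-token>"},
                         timeout=10)
    resp.raise_for_status()
    print("Registered, assigned credentials:", resp.json())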

Connectivity – Connectivity requirements vary quite a bit based on application. Some devices cannot feasibly maintain a constant connection, often due to power or network constraints, and sometimes periodic batched connections are all that is needed. However, increasingly applications require constant real-time connectivity, where information is streamed to and from the device.

Many Internet of Things architectures have two tiers of communications – one level that handles communication from devices to a collector (which may be a SCADA device, a home router, or even a mobile phone), and another that collects and manages information across collectors. However, there is an increase in the number of direct device connections to the Cloud to take advantage of its inherent portability and extensibility benefits (for example, allowing information from an activity tracking wearable to be shared easily across users or devices, or plugged into external analytics that exist only in the Cloud).

An emerging trend for connectivity is to use publish/subscribe protocols like MQTT, which optimize traffic to and from the device (or collector) and have the added benefit of being inherently event-oriented. They also allow anything to subscribe to anything else, effectively giving each device its own API and offering an easier way to layer capabilities onto Internet of Things scenarios. And unlike point-to-point protocols, the ability to address devices through topics reduces the overhead of managing so many individual connections, allowing devices to be addressed in logical groupings. MQTT is also less taxing on the network and on batteries than polling-based mechanisms, since it pushes data from devices only when needed and requires very little header overhead. It is also harder to spoof, since subscriptions are managed above the IP layer.
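To illustrate how lightweight this is in practice, here is a minimal publish/subscribe sketch using the open-source Eclipse Paho client (paho-mqtt 1.x API); the broker host and topic names are made up for illustration:

    # Minimal MQTT publish/subscribe sketch using the Eclipse Paho client
    # (paho-mqtt 1.x API). Broker host and topic names are illustrative.
    import json
    import paho.mqtt.client as mqtt

    BROKER = "broker.example.com"

    def on_connect(client, userdata, flags, rc):
        # Subscribe to readings from every device in the "plant1" group.
        client.subscribe("plant1/+/readings")

    def on_message(client, userdata, msg):
        reading = json.loads(msg.payload)
        print(msg.topic, reading)

    client = mqtt.Client()
    client.on_connect = on_connect
    client.on_message = on_message
    client.connect(BROKER, 1883, keepalive=60)

    # A device (or collector) publishes a small JSON reading to its own topic.
    client.publish("plant1/pump-0042/readings",
                   json.dumps({"temperature_c": 71.5, "rpm": 1450}), qos=1)

    client.loop_forever()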

Security & Privacy – Security has quickly risen to the top of most requirements for Internet of Things, simply because software-enabled physical infrastructure has some pretty severe implications if compromised. Whether the concern involves controls on industrial or city infrastructure, or simply data privacy and security, there is no way around the issue. Devices need to take some responsibility here by limiting the tamper risk, but ultimately most of the security enforcement needs to be done by the things the devices connect into. Transport layer security is critical, but authentication, authorization, and access control also need to be enforced on both sides of the connection. Ideally, security should be enforced all the way up to the application layer, filtering the content of messages to avoid things like injection attacks from compromised devices. In cases where data is cached on the device, or the device has privileged access, remote wiping capability is also a good idea.

In the case of consumer devices, privacy is often the bigger issue. Today, most devices don’t offer much choice about how, when, and where they share information, but in the future the control needs to shift to the consumer, allowing them to opt in to sharing data. It is probable that this will become something that home gateways manage – allowing users to select exactly which cloud services they would like to share specific device data with. In any case, data privacy needs to be designed into Internet of Things networks from the start.

Big Data Analytics – Much of the promise of the Internet of Things is contained in the ability to detect and respond to important events within a sea of emitted data. Even in cases where data is not collected in real-time, immediate response may be desired when something important is detected. Therefore, the ability to analyze the data stream in real-time, and find the needles in the haystack, is critical in many scenarios. Many scenarios also require predictive analytics to optimize operation or reduce risk (for example, predicting when something is likely to fail and proactively taking it offline). Ideally, analytic models can be developed offline using standard analytics tools and then fed into the stream for execution.
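At the point of execution, a scoring model is often reduced to something very simple. As a toy sketch (with invented thresholds), the detector below watches a stream of readings and flags a sustained temperature rise:

    # Toy real-time detector: flag a device whose temperature stays above a
    # threshold for N consecutive readings. Thresholds are illustrative.
    from collections import defaultdict, deque

    WINDOW = 5          # consecutive readings to consider
    THRESHOLD_C = 70.0  # invented threshold

    recent = defaultdict(lambda: deque(maxlen=WINDOW))

    def on_reading(device_id, temperature_c):
        window = recent[device_id]
        window.append(temperature_c)
        if len(window) == WINDOW and min(window) > THRESHOLD_C:
            return f"ALERT: {device_id} overheating ({list(window)})"
        return None

    for t in [68, 71, 72, 73, 74, 75]:
        alert = on_reading("pump-0042", t)
        if alert:
            print(alert)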

In addition to real-time event analytics, offline data analytics are important in Internet of Things scenarios. This type of analytic processing is typically run in batch against much larger static data sets in order to uncover trends or anomalies in data that might help provide new insights. Hadoop-based technologies are capable of working against lower cost storage, and can pull in data of any format, allowing patterns to be detected across even unrelated data sets. For example, sensor data around a failure could be analyzed to try to recognize a pattern, but external sources such as space weather data could also be pulled into the analysis to see if there were external conditions that led to the failure. When patterns are detected using big data analytics, the pattern can be applied to real-time event analytics to detect or predict the same conditions in real-time.
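As a small-scale sketch of that kind of offline pass (using pandas as a stand-in for a Hadoop-style job, with invented file and column names), one might pull out the readings preceding each recorded failure and look for a common signature:

    # Offline pattern hunt around failures: a small-scale sketch with pandas
    # standing in for a Hadoop-style job. File and column names are invented.
    import pandas as pd

    readings = pd.read_csv("sensor_readings.csv", parse_dates=["ts"])   # ts, device_id, temperature_c
    failures = pd.read_csv("failure_events.csv", parse_dates=["ts"])    # ts, device_id

    windows = []
    for _, failure in failures.iterrows():
        start = failure["ts"] - pd.Timedelta(hours=6)
        mask = ((readings["device_id"] == failure["device_id"]) &
                (readings["ts"] >= start) & (readings["ts"] < failure["ts"]))
        windows.append(readings[mask].assign(failure_ts=failure["ts"]))

    pre_failure = pd.concat(windows)
    # A crude "signature": how did temperature behave in the hours before failure?
    print(pre_failure.groupby("failure_ts")["temperature_c"].describe())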

Mediation and Orchestration – In addition to analytics, there needs to be some level of mediation and orchestration capability in order to recognize complex events across related devices, coordinate responses, and mediate differences across data structures and protocols. While many newer devices connect simply over standard protocols like HTTP and MQTT, older devices often rely on proprietary protocols and data formats. As the variety of sensors increases, and as multiple generations of sensors are deployed on the same networks, mediation capabilities allow data to be normalized into a more standard set of elements, so that readings from similar types of sensors that produce different data formats can be easily aligned and compared.

Orchestration is also important in allowing events across sensors or devices to be intelligently correlated. Since the vast majority of data created by the Internet of Things will be uninteresting, organizations need a way to recognize interesting things when they happen, even when those things only become interesting once several related things occur. For example, a slight temperature rise in one sensor might not be a huge issue, until you consider that a coolant pump is also experiencing a belt slippage. Orchestration allows seemingly disconnected events to be connected together into a more complex event. It also provides a mechanism to generate an appropriate, and sometimes complex, response.
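A toy sketch of that coolant-pump example (with invented device relationships and time window) might look like this, escalating only when both symptoms occur on related equipment within a few minutes of each other:

    # Toy correlation rule: escalate only when a temperature-rise event and a
    # belt-slippage event occur on related equipment within a short window.
    # Device relationships and the time window are invented for illustration.
    from datetime import datetime, timedelta

    RELATED = {"motor-7": "coolant-pump-3"}   # motor -> its coolant pump
    WINDOW = timedelta(minutes=5)

    pending = {}   # device_id -> (event_type, timestamp)

    def on_event(device_id, event_type, ts):
        pending[device_id] = (event_type, ts)
        pump = RELATED.get(device_id)
        if pump and event_type == "temperature_rise" and pump in pending:
            pump_event, pump_ts = pending[pump]
            if pump_event == "belt_slippage" and abs(ts - pump_ts) <= WINDOW:
                return f"ESCALATE: {device_id} hot while {pump} is slipping"
        return None

    on_event("coolant-pump-3", "belt_slippage", datetime(2013, 10, 1, 9, 2))
    print(on_event("motor-7", "temperature_rise", datetime(2013, 10, 1, 9, 5)))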

Data Management – As data flows from connected devices, the data must be managed in a way that allows it to be easily understood and analyzed by business users. Many connected devices provide data in incremental updates, like progressive meter reads. This type of data is best managed in a time series, in what is often called a historian database, so that its change and deviation over time can be easily understood. For example, energy load profiles, temperature traces, and other sensor readings are best understood when analyzed over a period of time. Time series database techniques allow very high volumes of writes to occur in the database layer without disruption, enabling the database to keep up with the types of volumes inherent in Internet of Things scenarios. Time series query capabilities allow businesses to understand trends and outliers very quickly within a data stream, without having to write complex queries to manipulate stubborn relational structures.
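As a minimal illustration of the time-series shape of this data, here is an in-memory stand-in for a historian store (not any particular product): append-only writes per device, with range queries by timestamp:

    # Minimal in-memory stand-in for a time-series ("historian") store:
    # append-only writes per device, range queries by timestamp.
    from bisect import bisect_left, bisect_right
    from collections import defaultdict

    series = defaultdict(list)   # device_id -> time-ordered list of (ts, value)

    def append(device_id, ts, value):
        series[device_id].append((ts, value))       # writes arrive in time order

    def query_range(device_id, start, end):
        points = series[device_id]
        timestamps = [ts for ts, _ in points]
        lo, hi = bisect_left(timestamps, start), bisect_right(timestamps, end)
        return points[lo:hi]

    append("meter-17", 1000, 42.0)
    append("meter-17", 1060, 42.4)
    append("meter-17", 1120, 42.9)
    print(query_range("meter-17", 1000, 1100))   # -> the first two readings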

Another important dimension to Internet of Things data is geospatial metadata. Since many connected things can be mobile, tracking location is often important alongside the time dimension. Geospatial analytics provide great value within many Internet of Things scenarios, including connected vehicles and equipment tracking use cases. Including native geospatial capabilities at the data layer, and allowing for easy four-dimensional analysis combining time series and geospatial data, opens more possibilities for extracting value from the Internet of Things.

With the volume of data in many Internet of Things settings, the cost of retaining everything can quickly get out of control, so it needs to be governed by retention policies. The utility of data degrades fairly quickly in most scenarios, so immediate access becomes less important as it ages. Data retention policies allow the business to determine how long to retain information in the database layer. Often this is based on a specified amount of time, but it can also be based on a specific number of sensor readings or other factors. Complex policies could also define conditions under which default policies should be overridden, in cases where something interesting was sensed, for example.
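A retention pass over such a store might look like the following sketch; the policy values, and the “keep anything flagged as interesting” override, are invented for illustration:

    # Sketch of an age-based retention pass with an override for readings
    # flagged as interesting. Policy values are invented for illustration.
    from datetime import datetime, timedelta

    RETENTION = timedelta(days=90)

    def apply_retention(readings, now):
        """readings: list of dicts with 'ts' (datetime) and optional 'flagged'."""
        cutoff = now - RETENTION
        return [r for r in readings if r["ts"] >= cutoff or r.get("flagged")]

    readings = [
        {"ts": datetime(2013, 1, 1), "value": 10.2},
        {"ts": datetime(2013, 1, 2), "value": 97.0, "flagged": True},   # kept anyway
        {"ts": datetime(2013, 9, 1), "value": 11.1},
    ]
    print(apply_retention(readings, now=datetime(2013, 10, 1)))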

Asset Management – When the lifecycle of connected things needs to be managed, asset management becomes a key capability. This is particularly important when dealing with high value assets, or instances where downtime equates to substantial lost revenue opportunity. Asset management solutions provide a single point of control over all types of assets — production, infrastructure, facilities, transportation and communications — enabling the tracking of individual assets, along with their deployment, location, service history, and resource and parts supply chain.

Asset management manages details on failure conditions and the specific prescribed service instructions related to those conditions. It helps manage both planned and unplanned work activities, from initial request through completion and the recording of actuals. It also establishes service level agreements, enables proactive monitoring of service level delivery, and supports the implementation of escalation procedures. By connecting real-time awareness with asset management, maintenance requirements can be more effectively predicted, repair cycles optimized, and assets more effectively tracked and managed.

Dashboards & Visualization – When dealing with large volumes of data, one of the best ways of understanding what is happening is to use visualizations and dashboards. These technologies allow information to be easily summarized into live graphical views that quickly show where problems and outliers may be hidden. Users can drill into potential problem areas and get more detail to be able to diagnose problems and propose resolution. Dashboards provide context to information and provide users with specific controls to address common issues. They provide a way to visually alert users to important data elements in real time, and then act on that information directly.

Integration – Integration into on-premise or Cloud-based back-office systems of record is critical for many Internet of Things scenarios. Back-office systems provide customer, inventory, sales, and supply chain data, and also provide access to key functions like MRP, purchasing, customer support, and sales automation. By integrating with these key systems, insights and events gained from the Internet of Things can be converted into actions. For example, the ordering of parts could be automated when a condition predicting a pending failure is detected, or a partner could be alerted to an opportunity to replenish an accessory when low supply is detected.
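As a sketch of that first example (the ERP endpoint, part numbers, and event fields are hypothetical), a handler could place a parts order through a back-office REST API whenever a predicted-failure event arrives:

    # Sketch: turn a predicted-failure event into a back-office action.
    # The ERP endpoint, part numbers, and event fields are hypothetical.
    import requests

    ERP_ORDERS_URL = "https://erp.example.com/api/purchase-orders"
    REPLACEMENT_PARTS = {"coolant-pump-v2": "PART-8831"}   # model -> part number

    def on_predicted_failure(event):
        part = REPLACEMENT_PARTS.get(event["model"])
        if not part:
            return
        order = {
            "part_number": part,
            "quantity": 1,
            "ship_to_site": event["site"],
            "reason": f"Predicted failure on {event['device_id']}",
        }
        requests.post(ERP_ORDERS_URL, json=order, timeout=10).raise_for_status()

    on_predicted_failure({"device_id": "pump-0042", "model": "coolant-pump-v2",
                          "site": "plant1"})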

Client SDK – The processing power of connected devices is continuously increasing. In addition to providing a connection to the Internet, many of these devices are capable of additional processing functions. For example, in some cases where connections are sporadic, on-device caching is desirable. In other cases, it even makes sense to run some filtering or analytics directly on higher-powered devices to pre-filter or manipulate data.

Providing a client SDK that enables these functions helps the organizations that build these devices innovate more quickly. It also enables third-party developers to drive their own innovations into these products. At a minimum, the ability to manage the client side of a publish/subscribe interaction is required. By enabling these capabilities in the SDK, chip and device manufacturers can optimize their opportunity and increase the utility of their offerings.
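A minimal shape for such an SDK might look like the sketch below: buffer readings while disconnected, flush them in order on reconnect, and expose a simple publish call to the application. The class and its methods are invented, not any vendor’s SDK:

    # Sketch of a minimal device-side SDK: cache readings while disconnected,
    # flush on reconnect. The class is invented, not a specific vendor's SDK.
    import json
    from collections import deque

    class DeviceClient:
        def __init__(self, device_id, transport, cache_size=1000):
            self.device_id = device_id
            self.transport = transport            # anything with .send(topic, payload)
            self.cache = deque(maxlen=cache_size) # drop oldest if the cache overflows
            self.connected = False

        def publish(self, reading):
            message = (f"devices/{self.device_id}/readings", json.dumps(reading))
            if self.connected:
                self.transport.send(*message)
            else:
                self.cache.append(message)        # keep it for later

        def on_connected(self):
            self.connected = True
            while self.cache:                     # flush buffered readings in order
                self.transport.send(*self.cache.popleft())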

Want more detail? Check us out at ThingMonk December 2-3 in Shoreditch: THE conference to go to for Internet of Things!


Protecting Against JSON-Bourne Attacks

Mobile has become the new #1 target for hackers and cyberattacks. As consumers and businesses grow more comfortable conducting business over mobile devices, mobile becomes a natural target for the baddies who want to steal personal information, or just disrupt business. And if you believe what the experts are saying, you should be prepared for your mobile phone to eventually be hacked… With incidents of malicious code (particularly on Android) increasing by the day, it is vital that your organization is prepared.


That’s JSON-bourne, not Jason Bourne…

One of the main targets is not actually what is sitting on the phone, but instead the services that the app is accessing on the back end. These services present APIs that mobile apps invoke to get information into and out of back-end systems. In the mobile world, most of these APIs use a simple and concise data format called JSON to transmit this data and these requests. While most organizations protect these APIs using traditional firewall technologies, many are not doing enough to protect themselves from malicious content hidden in the JSON. According to IDC, “signature-based tools (antivirus, firewalls, and intrusion prevention) are only effective against 30–50% of current security threats.”

A similar issue arose at the height of SOA adoption, when protection against XML-bourne attacks became standard practice, but with the rise of mobile and lighter-weight RESTful services, organizations need to shift to make sure they are protecting themselves against new threats.

The problem is that it is fairly easy to inject malicious content, buried in JSON data, into a seemingly innocuous REST call. Unless the malicious content matches one of the signatures that your firewall is watching for, the content will get through to the server, where it can be made to execute automatically as server-side JavaScript. Since this code is generally executed in a less-protected area, it often has access to sensitive back-end systems, where it can do more damage or compromise private information.

Luckily, there is a way to protect against these threats without relying solely on good programming practices. Some of the same security gateways that organizations use to protect Web services can be easily extended to protect JSON/REST services. The best of these (like WebSphere DataPower) can be delivered as secure hardware appliances that prevent unauthorized tampering and provide FIPS 140-2 Level 3 certified protection. These gateways work by inspecting the data payloads and finding and filtering out suspect JSON data (among other things), providing a much deeper level of protection than traditional firewalls alone.

When you take a look at the OWASP top 10 threats, many of them remain relevant in JSON-centric applications. For example, Cross-Site Scripting (XSS) and Cross-Site Request Forgery (CSRF) are still concerns. In addition, hackers can inject very large JSON documents that can cause massive slowdowns in the systems that process those messages. However, the biggest threat is script injection – one that is a bit more specific to how JSON is processed in JavaScript, and one that enables direct execution of functions on infected servers.
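As a rough sketch of the kind of payload inspection a gateway performs before a message ever reaches application code (the size and depth limits and the allowed schema below are invented), incoming JSON can be checked for size, nesting depth, and unexpected or mistyped fields:

    # Sketch of gateway-style JSON payload inspection: reject oversized or
    # deeply nested documents and unexpected fields before they reach the app.
    # The limits and the allowed schema are invented for illustration.
    import json

    MAX_BYTES = 64 * 1024
    MAX_DEPTH = 10
    ALLOWED_FIELDS = {"user_id": int, "comment": str}

    def depth(node, level=1):
        if isinstance(node, dict):
            return max([depth(v, level + 1) for v in node.values()] or [level])
        if isinstance(node, list):
            return max([depth(v, level + 1) for v in node] or [level])
        return level

    def validate(raw_bytes):
        if len(raw_bytes) > MAX_BYTES:
            raise ValueError("payload too large")
        doc = json.loads(raw_bytes)
        if depth(doc) > MAX_DEPTH:
            raise ValueError("payload nested too deeply")
        for key, value in doc.items():
            expected = ALLOWED_FIELDS.get(key)
            if expected is None or not isinstance(value, expected):
                raise ValueError(f"unexpected or mistyped field: {key}")
        return doc

    print(validate(b'{"user_id": 7, "comment": "hello"}'))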

With all of the focus and spending on mobile security, organizations need to be considering this threat as much as they are the threats to what is resident on the phone itself. I don’t think this has sunk in for many organizations yet. Is your organization ready?


It all changes when “things” get APIs

I’ve been spending an inordinate amount of time focused on the “Internet of Things” lately, and I am impressed by how much buzz IoT has been getting, with a new connected device seemingly emerging each day. We’re in the early days, but things are moving fast.

In these early days, it seems that most connected “things” are producers of information. Sensors of many kinds are being deployed to collect information on all manner of things. Big data promoters and doomsayers alike rightly point to this information deluge as world-changing, allowing us to achieve an extra-sensory awareness of what is happening around us that opens up infinite new possibilities.

But it occurs to me that things will really get exciting when “things” expand beyond being just data producers, and begin to expose APIs that allow them to be controlled remotely. When a sensor can tell that a motor is overheating, that is good. When the overheating motor exposes an API that allows it to be remotely slowed or shut down automatically before it is ruined, that is even better.
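That kind of control interaction can be sketched in a few lines. Here a monitoring rule publishes a shutdown command to the motor’s command topic over MQTT; the broker address, topic names, threshold, and payload fields are all invented:

    # Sketch: an overheat rule publishes a command to the motor's control topic.
    # Broker address, topic names, threshold, and payload fields are invented.
    import json
    import paho.mqtt.client as mqtt   # paho-mqtt 1.x API

    client = mqtt.Client()
    client.connect("broker.example.com", 1883, keepalive=60)

    def on_temperature(device_id, temperature_c):
        if temperature_c > 90.0:                      # invented threshold
            command = {"action": "shutdown", "reason": "overheat"}
            client.publish(f"devices/{device_id}/commands", json.dumps(command), qos=1)

    on_temperature("motor-7", 96.3)
    client.loop(timeout=1.0)   # give the client a chance to flush the publish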

We’re not that far away from this, really. Many of the new connected things are built to be controlled remotely, some with simple Web APIs. And with technologies like MQTT, it is relatively easy to retrofit traditional connected things to be controlled remotely. I think the next wave of consumer device innovation will focus on this – everyone will want a device router (or hack their Linksys router to act as one like T-Rob did), and everyone will expect their new appliances and consumer electronics to be connected so that they can tell you when they are having problems, and of course, be controlled remotely through an API.

“Did I turn off the dryer before we left?”

“I’m not sure – check your home control app”

It’ll be here before you know it…


Reliable messaging for mobile apps

James Governor wrote a nice post on Facebook’s expanding usage of MQTT. I’ve written about WebSphere MQ’s native telemetry transport (MQTT) capabilities in the past. The fact that Facebook is using the MQTT protocol isn’t news – they have actually been using it in Facebook Messenger for years. However, the fact that they are expanding their use because of the benefits of the protocol is definitely an encouraging development.

“…this week Facebook announced it would stop offering lame mobile experiences by offering a new native IoS client… and it is deepening its commitment to MQTT”

Read more: http://redmonk.com/jgovernor/2012/08/24/facebooks-new-native-ios-client-a-kingmaker-for-mqtt-ibm-facebook-no-shit/#ixzz25Wc7YcGf

With Facebook expanding their usage of MQTT, I would think it is a safe bet that MQTT will become the dominant real-time messaging protocol for mobile apps. It isn’t surprising that other software vendors like Software AG are announcing support for it.

I talk to at least one organization every week that is looking for reliable real-time messaging between mobile apps and on-premise systems. Most customers are not adopting WebSockets wholesale yet (some because of compatibility concerns, some because of security and reliability concerns, some because of server resource usage concerns). Usually when I tell them they can simply extend the industry’s leading reliable messaging platform (WebSphere MQ) out to mobile devices, using the same management tools and skills they use today, they are both incredulous and thrilled. Add to that WebSphere MQ’s inherent security and reliability benefits, and the ability to support models like pub/sub, and you have a much more complete solution.

The benefits of MQTT are extraordinary when you consider battery and bandwidth usage. Compared to HTTPS polling on an Android device over 3G, MQTT delivers 93x higher throughput with much lower latency, while consuming one tenth of the bandwidth and well over 10x less battery. The scalability of the protocol is probably best evidenced by Facebook Messenger’s use of it, given Facebook has over 350M mobile users.


Enterprise Service Cloud in China

I am in China this week, meeting with customers and partners and presenting on “Next Generation SOA”. China embraced the original wave of SOA, and companies here are now very quickly extending SOA into new areas. For example, the concept of the Internet of Things is extremely advanced here, with most manufacturers instrumenting their equipment to enable better, more proactive response to maintenance issues. SOA is the underlying fabric of this.

While in Shanghai, I had the opportunity to meet with one of our China SI partners, Capgemini. They have coined the concept of an “Enterprise Cloud Bus,” which is a layer outside your ESB that exposes services and apps to the outside world. I like this thought, though I think “Enterprise Service Cloud” may be a better name. The idea is that there are sets of services and APIs that organizations want to expose internally and externally. These services may be traditional XML/SOAP services, or they may be JSON/REST services. They may interface with a combination of internal and external applications (cloud and on-premise). These same services can be used across multiple channels: internal applications, Web, partner applications, mobile, open APIs, even devices. Hence, the same service may have multiple interfaces and policies, and may even be presented as an open API rather than a traditional service.

We implement this pattern all the time, though often this layer is not called out separately from the ESB. However, the security and policy management requirements it raises, along with the need to support unpredictable load spikes due to the various ways the services can be accessed, make it prudent to look at this as a separate layer, even though technically many of these capabilities can be handled directly in the ESB. In fact, this concept originated with ESBs, which largely started out implementing service facade patterns on top of proprietary applications. The primary difference (and why many ESBs fail to enable this properly) is that the consumption model has become much more complex while the protocol has become markedly more open. As service/API consumers become more varied and plentiful, and more unknowns creep in around things like who will access, how, and from where, planning for this layer becomes imperative, and simply assuming your ESB provider can do it well is not a safe bet (even if they say they can).
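As a very rough sketch of the idea (with Flask standing in for the gateway layer, and all endpoint names, keys, and limits invented), the same internal service can be fronted by a JSON/REST facade that applies channel-specific policy before delegating:

    # Rough sketch of an "Enterprise Service Cloud" facade: a JSON/REST front
    # end that applies per-channel policy (here, a crude API-key rate limit)
    # before delegating to an internal service. Flask stands in for the gateway;
    # endpoint names, keys, and limits are invented.
    from collections import Counter
    from flask import Flask, jsonify, request, abort

    app = Flask(__name__)
    RATE_LIMITS = {"partner-key-123": 100, "mobile-key-456": 1000}   # calls per day
    usage = Counter()

    def lookup_customer(customer_id):
        # Placeholder for the internal service call (SOAP, MQ, database, ...).
        return {"id": customer_id, "name": "ACME Corp", "status": "active"}

    @app.route("/api/customers/<customer_id>")
    def get_customer(customer_id):
        key = request.headers.get("X-Api-Key")
        if key not in RATE_LIMITS:
            abort(401)
        usage[key] += 1
        if usage[key] > RATE_LIMITS[key]:
            abort(429)
        return jsonify(lookup_customer(customer_id))

    if __name__ == "__main__":
        app.run(port=8080)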



Monetizing APIs

Publishing open APIs is one thing; monetizing them is another thing altogether. So how can you make money by publishing APIs?

Adam DuVander of ProgrammableWeb published an excellent blog post on exactly this subject. The Intuit–Twilio story is a great example of how a new economy is evolving around this concept. While Twilio has based its entire model on usage-based charges for open APIs, Intuit has used APIs to enhance its existing services.

“So the story goes, an Intuit employee was checking out the Twilio API documentation on a Friday afternoon. Intuit is a large payroll and accounting software company that wouldn’t have been on Twilio business development’s radar, at least not back then. The Intuit employee looked at the public docs, signed up for a trial account and spend the weekend creating a prototype. On Monday she shared a system that now performs payee verification for the millions of employees processed by Intuit’s systems. All while the team from Twilio was none-the-wiser.”

Though Adam raises multiple other models for monetizing APIs, I think these two will be the predominant models in the market for some time. However, some experts predict that this economy will quickly become much more dynamic.
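The mechanics behind the usage-based model are simple enough to sketch: meter each call against an API key and turn the counts into charges. A toy illustration (the rates, keys, and operation names are invented):

    # Toy sketch of usage-based API billing: meter calls per API key, then
    # turn the counts into charges. Rates, keys, and operations are invented.
    from collections import Counter

    PRICE_PER_CALL = {"sms.send": 0.01, "lookup.verify": 0.005}   # dollars
    usage = Counter()

    def record_call(api_key, operation):
        usage[(api_key, operation)] += 1

    def monthly_invoice(api_key):
        total = sum(count * PRICE_PER_CALL[op]
                    for (key, op), count in usage.items() if key == api_key)
        return round(total, 2)

    for _ in range(300):
        record_call("demo-key", "lookup.verify")
    print(monthly_invoice("demo-key"))   # -> 1.5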

Kin Lane posted his presentation from Gluecon that suggests the API economy will change much more dramatically, including the emergence of developer unions and social coding models. There are some really interesting thoughts here, so I suggest you take a look.

Amongst other things, Kin notes that SaaS and PaaS models will emerge that focus specifically on APIs, and that open source tools will emerge for publishing APIs and embeddable widgets. Some of this is already happening, but the possibilities are pretty exciting.

One of the amazing things that Kin points out is how far the API publishing world is ahead of the governance model that typically controls these things. Unlike Web services, which for years were gated by the emergence of standards, APIs have flourished in a “Wild West” setting that has encouraged rapid evolution and proliferation, but left gaping holes in some areas. Specifically, he notes that service and structure standards have yet to emerge, and that privacy and security are trailing current implementations.

That said, the API Management offerings in the market provide answers to many of these holes, which I believe is facilitating adoption by many large enterprises that would otherwise shy away from the risk.
