The Problem of Power

Listening to MIT’s Sanjay Sarma present a few weeks ago led me to think a bit about how we’re going to power all these connected devices in the Internet of Things. Mr. Sarma’s primary point is that RFID didn’t become ubiquitous until the lifetime cost of tags was reduced to pennies. Had the industry stuck with the conventional thinking around battery-powered devices with more powerful processors, it is unlikely RFID would have become as far-reaching.

The cost of batteries is dropping, and the lifespan we're able to stretch out of them is increasing, but I don't foresee lifetime costs (which combine the cost of the battery and the cost of replacing it when it fails) dropping to the under-10-cents range that Mr. Sarma was targeting. Even the best lithium battery technology can only be reliably stretched to a few years under constant load (even at low power) – and these batteries are still relatively expensive.

Some of the newer low-power communication protocols like 6LoWPAN, and mesh networking approaches like the one proposed by Iotera, can certainly help by reducing the energy required for long-range communication to well below what things like Wi-Fi, Bluetooth, or ZigBee need. But the reduced power needs don't change the equation enough to meet Mr. Sarma's goal for lifetime costs.

Energy harvesting offers promise as a battery booster. Ambient energy can be harvested from heat, light, motion, pressure, chemical reactions, or other sources. These military boots that harvest energy from marching soldiers are a good example, but this technology is in its infancy and still relatively expensive, well outside of Mr. Sarma's range.

But what Mr. Sarma proposed as a solution is actually quite interesting. He went back to his roots, and asked us to consider passive tags. Passive tags have no inherent power source. They are powered by a signal coming from a nearby “reader”. When the reader passes by, the tag is powered up and sends a response to the reader. The data transmitted from the tag can be anything it is able to sense.

At first blush, you may think (as I did), “didn’t we already discount Near Field Communication in IoT?”

Well on second thought, perhaps we really didn’t, and perhaps it does deserve some more consideration. If we are to get to 50 billion connected things in the next 5 years, we’ll need to expand beyond the consumer market. Businesses and governments will need to deploy lots of stuff, in all likelihood mostly sensors, and they’ll need to do it fast.

Some quick math: if you believe the 50 billion number and assume that 25% of those will be consumer devices (almost two for every human on earth), that leaves roughly 38 billion for businesses. If you assume that 80% of those will be deployed by the Global 3000, each of those companies would on average be deploying roughly 10 million connected things. At $10 each in five-year lifetime cost, that works out to $100 million per company. That seems to me to be on the high side (of course my assumptions are based on little science, but you get the point).

Passive tag technology reduces this number to $1,000,000 (assuming 10 cents per tag). And now it seems quite reasonable all of a sudden… in fact 50 billion devices now seems kind of low.
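
For those who want to check the arithmetic, here is the same back-of-envelope calculation as a quick Python sketch; the percentages and per-device costs are my assumptions from above, not real market data:

```python
# Back-of-envelope sketch of the deployment math above.
# All inputs are assumptions from the post, not measured data.

TOTAL_THINGS = 50e9          # the oft-quoted 50 billion connected things
CONSUMER_SHARE = 0.25        # assume 25% are consumer devices
GLOBAL_3000_SHARE = 0.80     # assume 80% of the rest go to the Global 3000
COMPANIES = 3000

business_things = TOTAL_THINGS * (1 - CONSUMER_SHARE)          # ~37.5 billion
per_company = business_things * GLOBAL_3000_SHARE / COMPANIES  # ~10 million

cost_battery_powered = per_company * 10.00   # $10 lifetime cost per device
cost_passive_tag = per_company * 0.10        # 10 cents per passive tag

print(f"Devices per Global 3000 company: {per_company:,.0f}")
print(f"Five-year cost at $10/device:    ${cost_battery_powered:,.0f}")
print(f"Five-year cost at $0.10/tag:     ${cost_passive_tag:,.0f}")
```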

So could passive tags be the answer? Mr. Sarma raised the example of passive tags being used to detect termites, using one regular antenna on the tag and one antenna fashioned from wood. When only a single signal is detected from the sensor, you can deduce that the wooden antenna has been eaten and therefore that you have termites. Since the tags are so cheap, you can afford to put them everywhere, which lets you average out the anomalies and, in the case of termites, pinpoint exactly where they are likely hiding out. Passive tag sensors now exist for everything from heat, to chemicals, to moisture.

I do believe that Mr. Sarma is on to something – at least for a specific class of sensor. It makes sense to me that if there is a cheaper sensor, that is what businesses will use. That said, there are many IoT scenarios that need much more than what passive sensors can provide, so there will likely be a mix of passive tags and more advanced applications with MCUs and batteries (and likely as much harvesting as is economically sensible).

So I take away two things from this: 1) Don't discount the power of the passive tag. I believe it will play a role in IoT. 2) If passive tags do begin to dominate the sensor space, 50 billion is likely far too low an estimate for the number of connected things.


My IBM Impact 2014 Keynote Demo

Here is a video of our main stage Internet of Things demo at IBM Impact 2014. Thanks to Mychelle Mollot for her quick wit and strong presence.

http://www.youtube.com/watch?v=z7EGJZoQUv4&t=16m26s



Pondering security in an Internet of things era

It hasn't taken long for the question of security to rise to the top of the list of concerns about the Internet of Things. If you are going to open up remote control interfaces for the things that assist our lives, you have to assume people will be motivated to abuse them. As cities get smarter, everything from parking meters to traffic lights is being instrumented for remote control. Manufacturing floors and power transmission equipment are likewise being instrumented. The opportunities for theft or sabotage are hard to deny. What would happen, for example, if a denial of service attack were launched against a city's traffic controls or energy supply?

Privacy is a different, but parallel concern. When you consider that a personal medical record is worth more money on the black market than a person’s credit card information, you begin to realize the threat. The amount of personal insight that could be gleaned if everything you did could be monitored would be frightening.

The problem is that the Internet of Things greatly expands the attack surface that must be secured. Organizations often have a hard enough time simply preventing attacks on traditional infrastructure. Add in potentially thousands of remote points of attack, many of which may not be feasible to physically protect, and now you have a much more complex security equation.

The truth is that it won’t be possible to keep the Internet of Things completely secure, so we have to design systems that assume that anything can be compromised. There must be a zero trust model at all points of the system. We’ve learned from protecting the edges of our enterprises that the firewall approach of simply controlling the port of entry is insufficient. And we need to be able to quickly recognize when a breach has occurred and stop it before it can cause more damage.

There are of course many aspects to securing the Internet of Things, but here are four elements to consider:

1) Physical device security – in most scenarios the connected "things" themselves are the weakest link in the security chain. Even a simple sensor that you may not instinctively worry about can turn into an attack point. Hackers can use these attack points to glean private information (like listening in on a smart energy meter to deduce that a home's occupants are away), or even to infiltrate entire networks. Physical device security starts with making devices tamper-resistant. For example, devices can be designed to become disabled (with data and keys wiped) when their cases are opened. Software threats can be minimized with secure booting techniques that can sense when software on the device has been altered (a minimal sketch of this idea appears below). Network threats can be contained by employing strong key management between devices and their connection points.

Since the number of connected things will be extraordinarily high, onboarding and bootstrapping security into each one can be daunting. Many hardware manufacturers are building "call home" technology into their products to facilitate this, establishing a secure handshake and key exchange. Some manufacturers are even using unique hardware-based signatures to facilitate secure key generation and reduce spoofing risk.
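
To make the secure-boot idea above a bit more concrete, here is a minimal sketch (in Python, purely to illustrate the concept; real implementations live in the boot ROM and verify signed images) that refuses to run firmware whose hash does not match a value held in tamper-resistant storage:

```python
import hashlib

# Hash of the approved firmware image, assumed to be burned into
# tamper-resistant storage at manufacturing time (hypothetical value).
TRUSTED_FIRMWARE_SHA256 = "9f2b..."  # placeholder

def firmware_is_trusted(image_path: str) -> bool:
    """Return True only if the on-device firmware matches the trusted hash."""
    sha = hashlib.sha256()
    with open(image_path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            sha.update(chunk)
    return sha.hexdigest() == TRUSTED_FIRMWARE_SHA256

if not firmware_is_trusted("/firmware/current.bin"):
    # Altered software detected: refuse to boot, wipe keys and data.
    raise SystemExit("Firmware integrity check failed - device disabled")
```

A real device would verify a cryptographic signature chain rather than a single hash, but the principle is the same: the device will not run code it cannot verify.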

2) Data security – data has both security and privacy concerns, so it deserves its own special focus. For many connected things, local on-device caching is required. Data should always be encrypted, preferably on the device prior to transport, and not decrypted until it reaches its destination. Transport layer encryption is common, but if data is cached on either side of the transport without being encrypted, then there are still risks. It is also usually a good idea to insert security policies that can inspect data to ensure that its structure and content are what should be expected. This discourages many potential threats, including injection and overflow attacks.
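
To make that concrete, here is a minimal sketch of encrypting a reading on the device before it is cached or transported, using a symmetric key; the key provisioning is assumed to have happened during secure onboarding, and the field names are invented for illustration:

```python
import json
from cryptography.fernet import Fernet

# Key assumed to be provisioned to the device during secure onboarding
# and held only by the device and the destination service.
device_key = Fernet.generate_key()   # in practice: loaded from secure storage
cipher = Fernet(device_key)

reading = {"sensor_id": "meter-042", "kwh": 3.7, "ts": "2014-05-02T22:43:00Z"}

# Encrypt before caching or transport; only the destination can decrypt.
payload = cipher.encrypt(json.dumps(reading).encode("utf-8"))

# ... payload travels over the network (ideally also under TLS) ...

assert json.loads(cipher.decrypt(payload)) == reading
```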

3) Network security – beyond securing the transmission of data, the Internet of Things needs to be sensitive to the fact that it is exposing data and control interfaces over a network. These interfaces need to be protected by bilateral authentication, and by detailed authorization policies that constrain what can be done at each side of the connection. Since individual devices cannot always be physically accessed for management, remote management is a must, enabling new software to be pushed to devices, but this also opens up connections that must be secured. In addition, policies need to be defined at the data layer to ensure that injection attacks are foiled. Virus and attack signature recognition is equally important. Denial-of-service attacks also need to be defended against, which can be facilitated by monitoring for unusual network activity and providing adequate buffering and balancing between the network and back-end systems.

4) Detecting and isolating breaches – despite the best efforts of any security infrastructure, it is impossible to completely eliminate breaches. This is where most security implementations fail. The key is to constantly monitor the environment down to the physical devices to be able to identify breaches when they occur. This requires the ability to recognize what a breach looks like. For the Internet of things, attacks can come in many flavors, including spoofing, hijacking, injection, viral, sniffing, and denial of service. Adequate real-time monitoring for these types of attacks is critical to a good security practice.

Once a breach or attack is detected, rapid isolation is the next most important step. Ideally, breached devices can be taken out of commission, and remotely wiped. Breached servers can be cut off from sensitive back end systems and shut down. The key is to be able to detect problems as quickly as possible and then immediately quarantine them.

Beyond these four security considerations, let me add two more that relate specifically to privacy. Since so much of the Internet of Things is built around consumer devices, the privacy risks are high. Consumers are increasingly pushing back against the surveillance economy inherent in many social networking tools, and the Internet of Things threatens to take that to the next level.

Opt in – Most consumers have no idea what information is being collected about them, even by the social tools they use every day. But when the devices you use become connected, the opportunities for abuse get even worse. Now there are many great reasons for your car and appliances and personal health monitors to be connected, but unless you know that your data is being collected, where the data is going, and how it is being used, you are effectively being secretly monitored. The manufacturers of these connected things need to provide consumers with a choice. There can be benefits to being monitored, like discounted costs or advanced services, but consumers must be given the opportunity to opt in for those benefits, and understand that they are giving up some personal liberties in the process.

Data anonymization – when data is collected, much of the time the goal is not to get specific personal information about an individual user, but rather to understand trends and anomalies that can help improve and optimize downstream experiences. Given that, organizations that employ the Internet of Things should strive to remove any personally identifying information as they conduct their data analysis. This practice will reduce the number of privacy exposures, while still providing many of the benefits of the data.
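
One simple (and admittedly partial) way to do this is to drop or irreversibly hash identifying fields before records enter the analytics pipeline. A minimal sketch, with hypothetical field names:

```python
import hashlib

# Fields we assume identify a person; everything else is kept for analysis.
PII_FIELDS = {"name", "email", "street_address", "phone"}

def anonymize(record: dict, salt: str) -> dict:
    """Drop direct identifiers and replace the user id with a salted hash."""
    clean = {k: v for k, v in record.items() if k not in PII_FIELDS}
    if "user_id" in clean:
        clean["user_id"] = hashlib.sha256(
            (salt + str(clean["user_id"])).encode("utf-8")
        ).hexdigest()
    return clean

sample = {"user_id": 1234, "name": "Jane Doe", "email": "jane@example.com",
          "thermostat_setpoint": 68, "ts": "2014-03-01T08:00:00Z"}
print(anonymize(sample, salt="rotate-me-regularly"))
```

Hashing and field-stripping alone don't guarantee anonymity (combinations of quasi-identifiers can still re-identify people), but they are a sensible first layer.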

The Internet of things requires a different approach to security and privacy. Already the headlines are rolling in about the issues, so it’s time to get serious about getting ahead of the problem.


Introducing BlueMix


Today, IBM unveiled a new platform for building and operating Cloud-native and dynamic hybrid cloud applications. I’m very excited about this announcement, not only because much of my portfolio has been mixed into it, but also because it is the vehicle by which I believe IBM will transform its business.

At its core, BlueMix is a platform as a service offering based on Cloud Foundry. But it is much more than that. We’ve invested a huge amount of code back into the core of Cloud Foundry, but we’re also extending what is possible in CF with our breadth of middleware capabilities. For example, we’ve extended the CF gateway natively with some of our DataPower gateway capabilities, to improve security control and traffic optimization. We’ve also extended CF’s management layer with operational intelligence and advanced performance management and analytics. And these are just a couple of examples.

From a DevOps perspective, we’ve hardened and optimized BlueMix on SoftLayer infrastructure, to provide excellent performance and seamless elasticity and operations, along with high availability and autoscaling. We’ve also created elastic Java (based on WebSphere Liberty) and JavaScript (based on Node.js) runtimes that can be used to run applications.

But the most exciting part of BlueMix for me is the new development paradigm. We've built a new UI for easily deploying your choice of runtime and binding any of a catalog of services to it in seconds. Scale and size of deployment are handled by the infrastructure and easily configured through the UI. A cloud-based IDE is built in, allowing live code editing and immediate response with instant DevOps cycles.

The services catalog is already very rich, with a variety of services that assist in building mobile applications (e.g. mobile push notifications), building service resiliency (e.g. caching based on Extreme Scale, elastic MQ based on WebSphere MQ), or extending application capabilities (e.g. Watson Discovery Advisor). There are also a variety of third-party services in the catalog, including open source services and offerings from companies like Twilio and Pitney Bowes. I expect the catalog to keep expanding on a weekly basis.

What all this adds up to is the most productive development experience I have ever seen from IBM. As organizations shift to cloud-first and hybrid cloud systems development, I believe BlueMix will be a significant differentiator for them. With BlueMix, IBM is demonstrating a true understanding of the change that Cloud represents for middleware, not just porting traditional products to the Cloud or redirecting attention to SaaS properties. Now that it is in open beta, we’ll see how customers respond.


Meet Your Makers


We are in the midst of a new era of innovation, and an entire generation of makers is emerging. These makers are enabled by direct access to a range of capabilities and building blocks that were previously only available to multi-million dollar corporations. They have unprecedented control over both the digital and physical world, access to unlimited computing capacity, and an entire Internet of data to exploit. These makers are reshaping not only the technology landscape, but also the practices and opportunities of traditional businesses. If you haven’t done so already, it is time to meet your makers.

Makers can instantly download free developer tools and advanced runtime environments to build new applications. They can spin up Cloud computing infrastructure in minutes to run these applications, accessing tens or hundreds of thousands of dollars worth of computing infrastructure without any up front costs. They can choose from thousands of open APIs to add key capabilities into their applications, incorporating the best data and the best functionality available in the market without outlaying a penny of capital expense. Perhaps most amazingly, makers don’t need to be particularly sophisticated to take advantage of all of this – this is a mass movement, not an exclusive one.

The maker generation has been empowered by the removal of three key barriers that have traditionally kept this type of innovation in the hands of large corporations:

  1. Economics
  2. Closed Systems
  3. Technological Complexity

Economic Barriers
The removal of economic barriers through the availability of Cloud computing has been a huge factor in the rise of the maker. Using Cloud services, developers have access to unlimited processing power, storage, and network infrastructure. They can also easily deploy applications across geographic boundaries, lowering the barriers to entering new markets. Pay-as-you-go models are standard, and elasticity is built-in, to allow makers to experiment at a very low cost, but easily scale to meet bursting demand when ideas catch on.

But the lowering of economic barriers has not been limited to Cloud. Universal mobile and Wi-Fi connectivity, with commoditizing cost structures, has empowered anything, anywhere to be connectable. As they dream up their designs, makers can assume connectivity with a relatively high degree of reliability.

And perhaps the biggest and most current disruption is in the economics of microelectronics. Computers that would have powered businesses thirty years ago can now be shrunk down to postage-stamp size. Battery technologies have evolved to remarkable lifespans, and energy to charge batteries can be collected from a variety of sources, including body heat and movement. And yet, with all this advancement, makers can buy an LTE-capable microprocessor on the open market for under $10.

Closed System Barriers
While many of the early computing companies built their businesses on closed systems, computer systems have gradually evolved toward openness, inspired by Internet technologies like TCP/IP and HTTP, and communications technologies like Wi-Fi and GSM. Open programming frameworks like Java, and data formats like XML and JSON, have lowered the barriers to interoperability, enabling makers to build new systems capable of interacting with the old. Open lightweight protocols like Bluetooth LE and MQTT have provided ways to easily bridge between the digital and physical worlds.

The most recent wave of technology innovation over the past ten years has produced advancements in open software technology like Hadoop, columnar databases, and document stores, all of which provide the tools for makers to manage and analyze huge volumes of data. And even commercial software companies now routinely offer their products through free download for development use, providing makers with limitless options without having to settle for second-class capabilities.

Technological Complexity Barriers
In my view, the biggest barrier to fall has been the one that has kept information technology in the control of a relatively small population of elite experts. The consumerization of technology, and the resulting simplification of its design, has created a huge accelerator for innovation, and vastly expanded the population of potential makers. Even 10 years ago, programming was mostly limited to technological whiz kids with advanced EE degrees or natural propensities toward mathematics and science. The barriers on the hardware side were even steeper, often requiring deep understanding of hardware architectures and embedded systems.

Today, technology can be used and controlled with a much more basic set of skills. In the Cloud, Platform as a Service technologies simplify traditionally complex tasks like configuring high availability and synchronizing data across data centers. JavaScript has emerged as a low-barrier programming language that simplifies the transition from client to server to database, while naturally extending to mobile devices. Even hardware has joined this wave, with technologies like the $25 Raspberry Pi that offer affordable and extensible hardware foundations for makers to build upon. And with 3D printers, even physical objects and prototypes can be created at a fraction of the cost and complexity of the past.

Perhaps most importantly, the drive toward simple Web APIs has inspired a whole new wave of Internet accessible capabilities with easy HTTP-based interfaces that can be learned in minutes. The result of this is a plethora of tools at the maker’s fingertips. Makers can combine data and functions from thousands of developers across thousands of companies, wiring together new applications in hours to achieve what would have taken weeks or months only a decade ago.

This is important because these makers are driving much of the innovation happening in the technology marketplace today. These makers are changing business models, cross-pollinating capabilities and data into new markets, and opening up new channels. These makers are a potential innovation engine for your own data and capabilities. These makers think in new ways, find new uses for existing assets, and find ways to monetize things that were never thought of as valuable. Your competitors are likely dipping their toes into this innovation pool already, not relying on only their traditional IT teams to discover and drive innovation.

So where are these makers? With these barriers removed, they are emerging everywhere. Many of them likely exist in your own organization. They are out there thinking of an idea, perhaps searching for data, expertise, or capabilities that your organization could offer them. These makers are the people who will disrupt your market or lead your industry’s next great opportunity. If they aren’t empowered by your point of view, they will find other means to achieve their goals, many of which may directly compete with your own.

I suggest you make an effort to reach out and meet your makers and empower them before the opportunity passes you by.


Connected vehicle commercial

New commercial on our Connected Vehicle project at Continental. Check it out:


A recipe for the Internet of Things

Seemingly every day a new story pops up about the Internet of Things, as new devices and wearables are launched into the market, and large enterprises contemplate the possibilities of a connected world. I’ve spent quite a bit of time discussing the requirements for taking advantage of these capabilities with organizations ranging from automobile manufacturers, to consumer electronics manufacturers, to industrial manufacturers, to city governments. What I’ve seen is a recurring pattern that acts as a guide to what’s needed to capitalize on the Internet of Things, so I thought I would share some of those thoughts.

Registration and Device Management – The first thing that is needed to support the Internet of Things is a way to easily register a device onto a network, whether that is a simple one-to-one connection between a device and a mobile phone, a home network router, or a cloud service. Self-registration is often ideal for personal devices, but curated registration through APIs or a UI is often better for more security conscious applications. The registration process should capture the API of the device (both data and control) and define the policies and data structures that will be used to talk to the device if those things are not already known in advance. Once registered, the firmware on the device sometimes needs to be remotely updateable, or even remotely wiped in cases with higher security risk.
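
As a rough sketch of what curated registration through an API might look like, here is a hypothetical example; the endpoint URL, token, and field names are all placeholders rather than any real service:

```python
import requests

REGISTRY_URL = "https://iot.example.com/api/devices"   # hypothetical endpoint

device = {
    "device_id": "pump-sensor-0017",
    "type": "vibration-sensor",
    "data_schema": {"vibration_hz": "float", "ts": "iso8601"},
    "control_api": ["reboot", "set_sample_rate"],
    "firmware_version": "1.0.3",
}

resp = requests.post(
    REGISTRY_URL,
    json=device,
    headers={"Authorization": "Bearer <provisioning-token>"},
    timeout=10,
)
resp.raise_for_status()
print("Registered, assigned credentials:", resp.json())
```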

Connectivity – Connectivity requirements vary quite a bit based on application. Some devices cannot feasibly maintain a constant connection, often due to power or network constraints, and sometimes periodic batched connections are all that is needed. However, increasingly applications require constant real-time connectivity, where information is streamed to and from the device.

Many Internet of Things architectures have two tiers of communications – one level that handles communication from devices to a collector (which may be a SCADA device, a home router, or even a mobile phone), and another that collects and manages information across collectors. However, there is an increase in the number of direct device connections to the Cloud to take advantage of its inherent portability and extensibility benefits (for example, allowing information from an activity tracking wearable to be shared easily across users or devices, or plugged into external analytics that exist only in the Cloud).

An emerging trend for connectivity is to use publish/subscribe protocols like MQTT, which optimize traffic to and from the device (or collector) and have the added benefit of being inherently event-oriented. They allow anything to publish to or subscribe to anything else, effectively giving each device its own API and offering an easier way to layer capabilities onto Internet of Things scenarios. Unlike point-to-point protocols, the ability to address devices through topics reduces the overhead of managing so many individual connections, allowing devices to be addressed in logical groupings. MQTT also has the benefit of being less taxing on network and battery infrastructure than polling-based mechanisms, since it pushes data from devices only when needed and requires little in the way of headers. It is also harder to spoof, since the subscriptions are managed above the IP layer.
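
Here is a minimal sketch of that publish/subscribe pattern using the Eclipse Paho client (assuming the paho-mqtt 1.x API, a broker at a hypothetical hostname, and invented topic names):

```python
import json
import paho.mqtt.client as mqtt

BROKER = "broker.example.com"   # hypothetical broker / collector

def on_message(client, userdata, msg):
    # Anything can subscribe to anything else: here we react to commands
    # addressed to a logical group of devices rather than to one socket.
    print("command received on", msg.topic, json.loads(msg.payload))

client = mqtt.Client(client_id="meter-042")
client.on_message = on_message
client.connect(BROKER, 1883)
client.subscribe("buildings/7/meters/+/commands")

# Publish a reading only when there is something to say (no polling).
client.publish("buildings/7/meters/meter-042/readings",
               json.dumps({"kwh": 3.7, "ts": "2014-05-02T22:43:00Z"}))
client.loop_forever()
```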

Security & Privacy – Security has quickly risen to the top of most requirements for Internet of Things, simply because software-enabled physical infrastructure has some pretty severe implications if compromised. Whether the concern involves controls on industrial or city infrastructure, or simply data privacy and security, there is no way around the issue. Devices need to take some responsibility here by limiting the tamper risk, but ultimately most of the security enforcement needs to be done by the things the devices connect into. Transport layer security is critical, but authentication, authorization, and access control also need to be enforced on both sides of the connection. Ideally, security should be enforced all the way up to the application layer, filtering the content of messages to avoid things like injection attacks from compromised devices. In cases where data is cached on the device, or the device has privileged access, remote wiping capability is also a good idea.

In the case of consumer devices, privacy is often the bigger issue. Today, most devices don't offer much choice about how, when, and where they share information, but in the future the control needs to shift to the consumer, allowing them to opt in to sharing data. It is probable that this will become something home gateways handle, allowing users to select exactly which cloud services specific device data is shared with. In any case, data privacy needs to be designed into Internet of Things networks from the start.

Big Data Analytics – Much of the promise of the Internet of Things is contained in the ability to detect and respond to important events within a sea of emitted data. Even in cases where data is not collected in real-time, immediate response may be desired when something important is detected. Therefore, the ability to analyze the data stream in real-time, and find the needles in the haystack, is critical in many scenarios. Many scenarios also require predictive analytics to optimize operation or reduce risk (for example, predicting when something is likely to fail and proactively taking it offline). Ideally, analytic models can be developed offline using standard analytics tools and then fed into the stream for execution.
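
The "needle in a haystack" piece doesn't have to be exotic. Here is a minimal sketch of a streaming check that flags readings deviating sharply from a rolling baseline; in practice the scoring model would be developed offline with proper analytics tools and then pushed into the stream, as described above:

```python
from collections import deque
from statistics import mean, stdev

class StreamAnomalyDetector:
    """Flag readings more than `threshold` standard deviations from a rolling mean."""

    def __init__(self, window=100, threshold=3.0):
        self.window = deque(maxlen=window)
        self.threshold = threshold

    def observe(self, value: float) -> bool:
        anomalous = False
        if len(self.window) >= 30:          # need a minimal baseline first
            mu, sigma = mean(self.window), stdev(self.window)
            if sigma > 0 and abs(value - mu) > self.threshold * sigma:
                anomalous = True
        self.window.append(value)
        return anomalous

detector = StreamAnomalyDetector()
for reading in [70.1, 70.3, 69.8] * 20 + [95.2]:   # synthetic temperature feed
    if detector.observe(reading):
        print("anomaly detected:", reading)
```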

In addition to real-time event analytics, offline data analytics are important in Internet of Things scenarios. This type of analytic processing is typically run in batch against much larger static data sets in order to uncover trends or anomalies in data that might help provide new insights. Hadoop-based technologies are capable of working against lower cost storage, and can pull in data of any format, allowing patterns to be detected across even unrelated data sets. For example, sensor data around a failure could be analyzed to try to recognize a pattern, but external sources such as space weather data could also be pulled into the analysis to see if there were external conditions that led to the failure. When patterns are detected using big data analytics, the pattern can be applied to real-time event analytics to detect or predict the same conditions in real-time.

Mediation and Orchestration – In addition to analytics, there needs to be some level of mediation and orchestration capability in order to recognize complex events across related devices, coordinate responses, and mediate differences across data structures and protocols. While many newer devices connect simply over standard protocols like HTTP and MQTT, older devices often rely on proprietary protocols and data formats. As the variety of sensors increases, and as multiple generations of sensors are deployed on the same networks, mediation capabilities allow data to be normalized into a more standard set of elements, so that readings from similar types of sensors that produce different data formats can be easily aligned and compared.
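
Mediation often boils down to mapping each device generation's payload into one canonical structure. A minimal sketch, with made-up formats for two generations of temperature sensors:

```python
def normalize(raw: dict) -> dict:
    """Map different sensor payload formats to one canonical reading."""
    if "tempF" in raw:                      # older generation: Fahrenheit, epoch time
        return {"sensor_id": raw["id"],
                "temp_c": (raw["tempF"] - 32) * 5.0 / 9.0,
                "ts": raw["epoch"]}
    if "temperature_c" in raw:              # newer generation: already metric
        return {"sensor_id": raw["deviceId"],
                "temp_c": raw["temperature_c"],
                "ts": raw["timestamp"]}
    raise ValueError("unknown sensor payload format")

old_gen = {"id": "t-11", "tempF": 98.6, "epoch": 1400000000}
new_gen = {"deviceId": "t-207", "temperature_c": 37.0, "timestamp": 1400000060}
print(normalize(old_gen), normalize(new_gen))
```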

Orchestration is also important in allowing events across sensors or devices to be intelligently correlated. Since the vast majority of data created by the Internet of Things will be uninteresting, organizations need a way to recognize interesting things when they happen, even when those things only become interesting once several related things occur. For example, a slight temperature rise in one sensor might not be a huge issue, until you consider that a coolant pump is also experiencing a belt slippage. Orchestration allows seemingly disconnected events to be connected together into a more complex event. It also provides a mechanism to generate an appropriate, and sometimes complex, response.
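
A tiny sketch of that correlation idea: neither event matters much alone, but the two together, close in time and on the same pump, become a complex event worth acting on (event names, window, and thresholds are invented for illustration; a real deployment would use a proper event-processing engine):

```python
from datetime import datetime, timedelta

WINDOW = timedelta(minutes=10)
recent_events = []   # in practice: a stream processor or CEP engine

def ingest(event: dict):
    recent_events.append(event)
    temp_rises = [e for e in recent_events if e["type"] == "temp_rise"]
    slippages = [e for e in recent_events if e["type"] == "belt_slippage"]
    for t in temp_rises:
        for s in slippages:
            same_pump = t["pump"] == s["pump"]
            close_in_time = abs(t["ts"] - s["ts"]) <= WINDOW
            if same_pump and close_in_time:
                print(f"complex event: possible coolant pump failure on {t['pump']}")

ingest({"type": "temp_rise", "pump": "P-7", "ts": datetime(2014, 5, 2, 10, 0)})
ingest({"type": "belt_slippage", "pump": "P-7", "ts": datetime(2014, 5, 2, 10, 4)})
```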

Data Management – As data flows from connected devices, the data must be managed in a way that allows it to be easily understood and analyzed by business users. Many connected devices provide data in incremental updates, like progressive meter reads. This type of data is best managed in a time series, in what is often called a historian database, so that its change and deviation over time can be easily understood. For example, energy load profiles, temperature traces, and other sensor readings are best understood when analyzed over a period of time. Time series database techniques allow very high volumes of writes to occur in the database layer without disruption, enabling the database to keep up with the types of volumes inherent in Internet of Things scenarios. Time series query capabilities allow businesses to understand trends and outliers very quickly within a data stream, without having to write complex queries to manipulate stubborn relational structures.
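
As a toy illustration of the historian idea (real time series databases are far more specialized), SQLite with a time-ordered index is enough to show the shape of append-only writes and trend queries:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE readings (
                 sensor_id TEXT, ts INTEGER, value REAL)""")
db.execute("CREATE INDEX idx_readings_ts ON readings (sensor_id, ts)")

# High-volume, append-only writes: one row per meter read.
rows = [("meter-042", 1400000000 + i * 60, 3.5 + i * 0.01) for i in range(1000)]
db.executemany("INSERT INTO readings VALUES (?, ?, ?)", rows)

# Simple trend query: hourly average load profile for one meter.
hourly = db.execute("""SELECT ts / 3600 AS hour, AVG(value)
                       FROM readings
                       WHERE sensor_id = 'meter-042'
                       GROUP BY hour ORDER BY hour""").fetchall()
print(hourly[:3])
```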

Another important dimension to Internet of Things data is geospatial metadata. Since many connected things can be mobile, tracking location is often important alongside the time dimension. Geospatial analytics provide great value within many Internet of Things scenarios, including connected vehicles and equipment tracking use cases. Including native geospatial capabilities at the data layer, and allowing for easy four-dimensional analysis combining time series and geospatial data, opens more possibilities for extracting value from the Internet of Things.

With the volume of data in many Internet of Things settings, the cost of retaining everything can quickly get out of control, so it needs to be governed by retention policies. The utility of data degrades fairly quickly in most scenarios, so immediate access becomes less important as it ages. Data retention policies allow the business to determine how long to retain information in the database layer. Often this is based on a specified amount of time, but it can also be based on a specific number of sensor readings or other factors. Complex policies could also define conditions under which default policies should be overridden, in cases where something interesting was sensed, for example.
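
A retention policy can be as simple as a scheduled purge with an escape hatch for readings flagged as interesting. A minimal sketch, with an illustrative 90-day default:

```python
from datetime import datetime, timedelta

RETENTION = timedelta(days=90)          # default policy (illustrative)

def purge(readings: list, now: datetime) -> list:
    """Keep recent readings, plus older ones explicitly flagged as interesting."""
    cutoff = now - RETENTION
    return [r for r in readings
            if r["ts"] >= cutoff or r.get("interesting", False)]

data = [
    {"ts": datetime(2014, 1, 1), "value": 3.5},                       # expired
    {"ts": datetime(2014, 1, 2), "value": 9.9, "interesting": True},  # kept anyway
    {"ts": datetime(2014, 4, 20), "value": 3.6},                      # still fresh
]
print(purge(data, now=datetime(2014, 5, 2)))
```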

Asset Management – When the lifecycle of connected things needs to be managed, asset management becomes a key capability. This is particularly important when dealing with high value assets, or instances where downtime equates to substantial lost revenue opportunity. Asset management solutions provide a single point of control over all types of assets — production, infrastructure, facilities, transportation and communications — enabling the tracking of individual assets, along with their deployment, location, service history, and resource and parts supply chain.

Asset management tracks details on failure conditions and the prescribed service instructions related to those conditions. It helps manage both planned and unplanned work activities, from initial request through completion and recording of actuals. It also establishes service level agreements, enables proactive monitoring of service level delivery, and supports the implementation of escalation procedures. By connecting real-time awareness with asset management, maintenance requirements can be predicted more effectively, repair cycles optimized, and assets tracked and managed more effectively.

Dashboards & Visualization – When dealing with large volumes of data, one of the best ways of understanding what is happening is to use visualizations and dashboards. These technologies allow information to be easily summarized into live graphical views that quickly show where problems and outliers may be hidden. Users can drill into potential problem areas and get more detail to be able to diagnose problems and propose resolution. Dashboards provide context to information and provide users with specific controls to address common issues. They provide a way to visually alert users to important data elements in real time, and then act on that information directly.

Integration – Integration into on-premises or Cloud-based back office systems of record is critical for many Internet of Things scenarios. Back office systems provide customer, inventory, sales, and supply chain data, and also provide access to key functions like MRP, purchasing, customer support, and sales automation. By integrating with these key systems, insights and events gained from the Internet of Things can be converted into actions. For example, parts ordering could be automated when a condition that predicts a pending failure is detected, or a partner could be alerted to an opportunity to replenish an accessory when a low supply is detected.

Client SDK – The processing power of connected devices is continuously increasing. In addition to providing a connection to the Internet, many of these devices are capable of additional processing functions. For example, in some cases where connections are sporadic, on-device caching is desirable. In other cases, it even makes sense to run some filtering or analytics directly on higher-powered devices to pre-filter or manipulate data.

Providing a client SDK that enables these functions helps organizations that build these devices to innovate more quickly. It also enables third-party developers to drive their own innovations into these products. At a minimum, the ability to manage the client side of a publish/subscribe interaction is required. By enabling these capabilities in the SDK, chip and device manufacturers can optimize their opportunity and increase the utility of their offerings.
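
A minimal sketch of what such an SDK might offer on a higher-powered device: filter locally, cache while offline, and flush when the connection returns. The publish callable stands in for whatever transport the SDK wraps, and the thresholds are illustrative:

```python
import json

class EdgeClient:
    """Toy client-side helper: local filtering plus store-and-forward caching."""

    def __init__(self, publish, min_delta=0.5):
        self.publish = publish          # e.g. an MQTT publish function
        self.min_delta = min_delta      # only report meaningful changes
        self.last_sent = None
        self.cache = []

    def report(self, reading: dict, connected: bool):
        value = reading["value"]
        if self.last_sent is not None and abs(value - self.last_sent) < self.min_delta:
            return                      # pre-filter: not worth sending
        self.last_sent = value
        message = json.dumps(reading)
        if connected:
            for queued in self.cache:   # flush anything cached while offline
                self.publish(queued)
            self.cache.clear()
            self.publish(message)
        else:
            self.cache.append(message)  # cache until connectivity returns

client = EdgeClient(publish=print)
client.report({"sensor": "temp-9", "value": 21.0}, connected=False)
client.report({"sensor": "temp-9", "value": 23.0}, connected=True)
```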

Want more detail? Check us out at ThingMonk December 2-3 in Shoreditch: THE conference to go to for Internet of Things!


The tattooed cyborg

IEEE Spectrum has an interesting article this month on the possibilities of e-skin. Since I’ve been deeply involved in multiple Internet of things projects lately, it immediately captured my imagination.


Wearable technologies are already becoming mainstream with Google Glass and BodyMedia snatching headlines, but of course this takes it to a whole new level. By integrating circuitry into the skin through flexible adhesives or similar applications, the opportunities for expanding the senses become nearly endless. The article lists augmenting or replacing lost senses, monitoring health conditions, and adding better sensory perception to robots as possible applications, but it also speculates that a bionic skin could be used for extra-sensory communication by sensing the vibrations of the throat and translating that into transmitted signals. My mind immediately jumps to the warfighter, where this skin could provide better condition monitoring, sense the presence of hidden enemies, and identify dangerous chemicals in the air.

I also couldn't help but think about the IBM Research work on neurosynaptic chips. I wonder if these concepts will combine someday to produce prosthetics with full sensory capability. It's an exciting thought, and it shows that the stuff of science fiction is getting closer and closer to reality.


Systems of interaction

This week at IBM, a new term was coined: "systems of interaction," describing the integration across systems of engagement and systems of record. The idea is that you have systems focused on engaging with customers (systems of engagement) and other systems focused on transactions (systems of record), and the confluence of these helps drive interactions that can ultimately result in transactions for your business. That introduces new requirements for integration, security, reliability, and manageability across these domains. Find out more here: http://t.co/RfPAeohLKo.
