Category Archives: cloud

Introducing BlueMix

Today, IBM unveiled a new platform for building and operating Cloud-native and dynamic hybrid cloud applications. I’m very excited about this announcement, not only because much of my portfolio has been mixed into it, but also because it is the vehicle by which I believe IBM will transform its business.

At its core, BlueMix is a platform as a service offering based on Cloud Foundry. But it is much more than that. We’ve invested a huge amount of code back into the core of Cloud Foundry, but we’re also extending what is possible in CF with our breadth of middleware capabilities. For example, we’ve extended the CF gateway natively with some of our DataPower gateway capabilities, to improve security control and traffic optimization. We’ve also extended CF’s management layer with operational intelligence and advanced performance management and analytics. And these are just a couple of examples.

From a DevOps perspective, we’ve hardened and optimized BlueMix on SoftLayer infrastructure, to provide excellent performance and seamless elasticity and operations, along with high availability and autoscaling. We’ve also created elastic Java (based on WebSphere Liberty) and JavaScript (based on Node.js) runtimes that can be used to run applications.

But the most exciting part of BlueMix for me is the new development paradigm. We’ve built a new UI for deploying your choice of runtime and binding any service from the catalog to it in seconds. The scale and size of a deployment are handled by the infrastructure and easily configured through the UI. A cloud-based IDE is built in, allowing live code editing with immediate feedback and rapid DevOps cycles.
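
As a rough illustration of what binding a service means for application code, here is a minimal Python sketch of reading a bound service's credentials from Cloud Foundry's VCAP_SERVICES environment variable. The "elastic-caching" label and the credential field names are purely illustrative, not actual BlueMix service names.

```python
import json
import os

def get_service_credentials(service_label):
    """Return the credentials dict for the first bound service
    whose label matches service_label, or None if not bound."""
    vcap = json.loads(os.environ.get("VCAP_SERVICES", "{}"))
    for label, instances in vcap.items():
        if label == service_label and instances:
            return instances[0].get("credentials")
    return None

# Example: look up a hypothetical bound caching service.
creds = get_service_credentials("elastic-caching")  # label is illustrative
if creds:
    print("Cache endpoint:", creds.get("host"), creds.get("port"))
else:
    print("Service not bound; bind it through the UI or CLI first.")
```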

The services catalog is already very rich, with a variety of services that assist in building mobile applications (e.g. mobile push notifications), building service resiliency (e.g. caching based on Extreme Scale, elastic MQ based on WebSphere MQ), or extending application capabilities (e.g. Watson Discovery Advisor). There are also a variety of third-party services in the catalog, including open source services and offerings from companies like Twilio and Pitney Bowes. I expect the catalog to keep expanding on a weekly basis.

What all this adds up to is the most productive development experience I have ever seen from IBM. As organizations shift to cloud-first and hybrid cloud systems development, I believe BlueMix will be a significant differentiator for them. With BlueMix, IBM is demonstrating a true understanding of the change that Cloud represents for middleware, not just porting traditional products to the Cloud or redirecting attention to SaaS properties. Now that it is in open beta, we’ll see how customers respond.

Meet Your Makers

We are in the midst of a new era of innovation, and an entire generation of makers is emerging. These makers are enabled by direct access to a range of capabilities and building blocks that were previously only available to multi-million dollar corporations. They have unprecedented control over both the digital and physical world, access to unlimited computing capacity, and an entire Internet of data to exploit. These makers are reshaping not only the technology landscape, but also the practices and opportunities of traditional businesses. If you haven’t done so already, it is time to meet your makers.

Makers can instantly download free developer tools and advanced runtime environments to build new applications. They can spin up Cloud computing infrastructure in minutes to run these applications, accessing tens or hundreds of thousands of dollars worth of computing infrastructure without any up-front costs. They can choose from thousands of open APIs to add key capabilities to their applications, incorporating the best data and the best functionality available in the market without laying out a penny of capital expense. Perhaps most amazingly, makers don’t need to be particularly sophisticated to take advantage of all of this; this is a mass movement, not an exclusive one.

The maker generation has been empowered by the removal of three key barriers that have traditionally kept this type of innovation in the hands of large corporations:

  1. Economics
  2. Closed Systems
  3. Technological Complexity

Economic Barriers
The removal of economic barriers through the availability of Cloud computing has been a huge factor in the rise of the maker. Using Cloud services, developers have access to effectively unlimited processing power, storage, and network infrastructure. They can also easily deploy applications across geographic boundaries, lowering the barriers to entering new markets. Pay-as-you-go models are standard and elasticity is built in, allowing makers to experiment at very low cost yet scale easily to meet bursting demand when ideas catch on.

But the lowering of economic barriers has not been limited to Cloud. Universal mobile and Wi-Fi connectivity, with commoditizing cost structures, has empowered anything, anywhere to be connectable. As they dream up their designs, makers can assume connectivity with a relatively high degree of reliability.

And perhaps the biggest and most current disruption is in the economics of microelectronics. Computers that would have powered businesses thirty years ago can now be shrunk down to postage-stamp size. Battery technologies have evolved to remarkable lifespans, and the energy to charge batteries can be harvested from a variety of sources, including body heat and movement. And yet, with all this advancement, makers can buy an LTE-capable microprocessor on the open market for under $10.

Closed System Barriers
While many of the early computing companies built their businesses on closed systems, computer systems have gradually evolved toward openness, inspired by Internet technologies like TCP/IP and HTTP, and communications technologies like Wi-Fi and GSM. Open programming frameworks like Java, and data formats like XML and JSON, have lowered the barriers to interoperability, enabling makers to build new systems capable of interacting with the old. Open lightweight protocols like Bluetooth LE and MQTT have provided ways to easily bridge between the digital and physical worlds.

The most recent wave of technology innovation over the past ten years has produced advancements in open software technology like Hadoop, columnar databases, and document stores, all of which provide the tools for makers to manage and analyze huge volumes of data. And even commercial software companies now routinely offer their products through free download for development use, providing makers with limitless options without having to settle for second-class capabilities.

Technological Complexity Barriers
In my view, the biggest barrier to fall has been the one that has kept information technology in the control of a relatively small population of elite experts. The consumerization of technology, and the resulting simplification of its design, has created a huge accelerator for innovation, and vastly expanded the population of potential makers. Even 10 years ago, programming was mostly limited to technological whiz kids with advanced EE degrees or natural propensities toward mathematics and science. The barriers on the hardware side were even steeper, often requiring deep understanding of hardware architectures and embedded systems.

Today, technology can be used and controlled with a much more basic set of skills. In the Cloud, Platform as a Service technologies simplify traditionally complex tasks like configuring high availability and synchronizing data across data centers. JavaScript has emerged as a low-barrier programming language that simplifies the transition from client to server to database, while naturally extending to mobile devices. Even hardware has joined this wave, with technologies like the $25 Raspberry Pi that offer affordable and extensible hardware foundations for makers to build upon. And with 3D printers, even physical objects and prototypes can be created at a fraction of the cost and complexity of the past.

Perhaps most importantly, the drive toward simple Web APIs has inspired a whole new wave of Internet accessible capabilities with easy HTTP-based interfaces that can be learned in minutes. The result of this is a plethora of tools at the maker’s fingertips. Makers can combine data and functions from thousands of developers across thousands of companies, wiring together new applications in hours to achieve what would have taken weeks or months only a decade ago.
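
To give a flavor of how low that barrier is, here is a small Python sketch that chains two web APIs together with plain HTTP calls. The endpoints, parameters, and response fields are hypothetical stand-ins for whatever public APIs a maker might actually wire together.

```python
import requests

# Hypothetical endpoints standing in for any two public web APIs.
GEOCODE_URL = "https://api.example.com/geocode"
WEATHER_URL = "https://api.example.com/weather"

def weather_for_address(address, api_key):
    """Chain two HTTP APIs: geocode an address, then fetch the weather
    for the resulting coordinates."""
    geo = requests.get(GEOCODE_URL, params={"q": address, "key": api_key}).json()
    lat, lon = geo["lat"], geo["lon"]
    wx = requests.get(WEATHER_URL, params={"lat": lat, "lon": lon, "key": api_key}).json()
    return wx["summary"]

# print(weather_for_address("1 New Orchard Road, Armonk NY", "my-key"))
```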

This is important because these makers are driving much of the innovation happening in the technology marketplace today. These makers are changing business models, cross-pollinating capabilities and data into new markets, and opening up new channels. They are a potential innovation engine for your own data and capabilities. They think in new ways, find new uses for existing assets, and find ways to monetize things that were never thought of as valuable. Your competitors are likely dipping their toes into this innovation pool already, rather than relying only on their traditional IT teams to discover and drive innovation.

So where are these makers? With these barriers removed, they are emerging everywhere. Many of them likely exist in your own organization. They are out there thinking of an idea, perhaps searching for data, expertise, or capabilities that your organization could offer them. These makers are the people who will disrupt your market or lead your industry’s next great opportunity. If they aren’t empowered by your point of view, they will find other means to achieve their goals, many of which may directly compete with your own.

I suggest you make an effort to reach out and meet your makers and empower them before the opportunity passes you by.

A recipe for the Internet of Things

Seemingly every day a new story pops up about the Internet of Things, as new devices and wearables are launched into the market, and large enterprises contemplate the possibilities of a connected world. I’ve spent quite a bit of time discussing the requirements for taking advantage of these capabilities with organizations ranging from automobile manufacturers, to consumer electronics manufacturers, to industrial manufacturers, to city governments. What I’ve seen is a recurring pattern that acts as a guide to what’s needed to capitalize on the Internet of Things, so I thought I would share some of those thoughts.

Registration and Device Management – The first thing that is needed to support the Internet of Things is a way to easily register a device onto a network, whether that is a simple one-to-one connection between a device and a mobile phone, a home network router, or a cloud service. Self-registration is often ideal for personal devices, but curated registration through APIs or a UI is often better for more security conscious applications. The registration process should capture the API of the device (both data and control) and define the policies and data structures that will be used to talk to the device if those things are not already known in advance. Once registered, the firmware on the device sometimes needs to be remotely updateable, or even remotely wiped in cases with higher security risk.
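
A curated registration flow often boils down to a single HTTP call that declares the device's data and control interfaces and the policies that govern it. The sketch below assumes a hypothetical registry endpoint and payload shape; any real platform's API will differ.

```python
import requests

REGISTRY_URL = "https://iot.example.com/api/v1/devices"  # hypothetical registry endpoint

def register_device(device_id, device_type, token):
    """Register a device, declaring its data and control topics and the
    policies that govern how the platform talks to it."""
    payload = {
        "deviceId": device_id,
        "type": device_type,
        "dataApi": {"topic": f"devices/{device_id}/events", "format": "json"},
        "controlApi": {"topic": f"devices/{device_id}/commands"},
        "policies": {"firmwareUpdate": "remote", "remoteWipe": True},
    }
    resp = requests.post(REGISTRY_URL, json=payload,
                         headers={"Authorization": f"Bearer {token}"})
    resp.raise_for_status()
    return resp.json()  # typically returns generated credentials for the device
```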

Connectivity – Connectivity requirements vary quite a bit based on application. Some devices cannot feasibly maintain a constant connection, often due to power or network constraints, and sometimes periodic batched connections are all that is needed. However, increasingly applications require constant real-time connectivity, where information is streamed to and from the device.

Many Internet of Things architectures have two tiers of communications – one level that handles communication from devices to a collector (which may be a SCADA device, a home router, or even a mobile phone), and another that collects and manages information across collectors. However, there is an increase in the number of direct device connections to the Cloud to take advantage of its inherent portability and extensibility benefits (for example, allowing information from an activity tracking wearable to be shared easily across users or devices, or plugged into external analytics that exist only in the Cloud).

An emerging trend for connectivity is to use publish/subscribe protocols like MQTT, which optimize traffic to and from the device (or collector) and have the added benefit of being inherently event-oriented. They also allow anything to subscribe to anything else, effectively giving each device its own API and offering an easier way to layer capabilities onto Internet of Things scenarios. Unlike point-to-point protocols, the ability to address devices through topics reduces the overhead of managing so many individual connections, allowing devices to be addressed in logical groupings. MQTT is also less taxing on the network and on battery life than polling-based mechanisms, since it pushes data from devices only when needed and requires very little header overhead. It is also harder to spoof, since subscriptions are managed above the IP layer.
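
For concreteness, here is a minimal publish/subscribe sketch using the Eclipse Paho Python client (1.x-style API); the broker address and topic names are illustrative.

```python
import json
import paho.mqtt.client as mqtt  # Eclipse Paho MQTT client, 1.x-style API

BROKER = "broker.example.com"    # hypothetical broker address

def on_message(client, userdata, msg):
    # Every subscriber sees messages for the topics it subscribed to,
    # so adding a new consumer never requires touching the publisher.
    print(f"{msg.topic}: {msg.payload.decode()}")

client = mqtt.Client()
client.on_message = on_message
client.connect(BROKER, 1883)

# Address devices in logical groupings via topic wildcards:
# '+' matches one level, so this receives readings from every pump.
client.subscribe("plant1/pumps/+/temperature")

# The same client can also publish, e.g. a command to one device.
client.publish("plant1/pumps/pump-42/commands",
               json.dumps({"action": "reduce_speed"}), qos=1)

client.loop_forever()
```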

Security & Privacy – Security has quickly risen to the top of most requirements for Internet of Things, simply because software-enabled physical infrastructure has some pretty severe implications if compromised. Whether the concern involves controls on industrial or city infrastructure, or simply data privacy and security, there is no way around the issue. Devices need to take some responsibility here by limiting the tamper risk, but ultimately most of the security enforcement needs to be done by the things the devices connect into. Transport layer security is critical, but authentication, authorization, and access control also need to be enforced on both sides of the connection. Ideally, security should be enforced all the way up to the application layer, filtering the content of messages to avoid things like injection attacks from compromised devices. In cases where data is cached on the device, or the device has privileged access, remote wiping capability is also a good idea.

In the case of consumer devices, privacy is often the bigger issue. Today, most devices don’t offer much choice about how, when, and where they share information, but in the future that control needs to shift to the consumer, allowing them to opt in to sharing data. This is likely to become something that home gateways manage, letting users select exactly which cloud services specific device data is shared with. In any case, data privacy needs to be designed into Internet of Things networks from the start.

Big Data Analytics – Much of the promise of the Internet of Things is contained in the ability to detect and respond to important events within a sea of emitted data. Even in cases where data is not collected in real-time, immediate response may be desired when something important is detected. Therefore, the ability to analyze the data stream in real-time, and find the needles in the haystack, is critical in many scenarios. Many scenarios also require predictive analytics to optimize operation or reduce risk (for example, predicting when something is likely to fail and proactively taking it offline). Ideally, analytic models can be developed offline using standard analytics tools and then fed into the stream for execution.
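
As a toy illustration of finding the needles in the haystack, the sketch below flags readings that deviate sharply from a sliding-window baseline. Production systems would typically use a dedicated stream-processing engine, with models developed offline and fed into the stream as described above.

```python
from collections import deque
from statistics import mean, stdev

class StreamAnomalyDetector:
    """Flag readings that deviate sharply from a sliding-window baseline."""
    def __init__(self, window=100, threshold=3.0):
        self.readings = deque(maxlen=window)
        self.threshold = threshold

    def observe(self, value):
        is_anomaly = False
        if len(self.readings) >= 10:  # need a minimal baseline first
            mu, sigma = mean(self.readings), stdev(self.readings)
            if sigma > 0 and abs(value - mu) > self.threshold * sigma:
                is_anomaly = True
        self.readings.append(value)
        return is_anomaly

detector = StreamAnomalyDetector()
for reading in [71.0, 71.2, 70.9, 71.1] * 5 + [88.4]:
    if detector.observe(reading):
        print("Needle in the haystack:", reading)
```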

In addition to real-time event analytics, offline data analytics are important in Internet of Things scenarios. This type of analytic processing is typically run in batch against much larger static data sets in order to uncover trends or anomalies in data that might help provide new insights. Hadoop-based technologies are capable of working against lower cost storage, and can pull in data of any format, allowing patterns to be detected across even unrelated data sets. For example, sensor data around a failure could be analyzed to try to recognize a pattern, but external sources such as space weather data could also be pulled into the analysis to see if there were external conditions that led to the failure. When patterns are detected using big data analytics, the pattern can be applied to real-time event analytics to detect or predict the same conditions in real-time.

Mediation and Orchestration – In addition to analytics, there needs to be some level of mediation and orchestration capability in order to recognize complex events across related devices, coordinate responses, and mediate differences across data structures and protocols. While many newer devices connect simply over standard protocols like HTTP and MQTT, older devices often rely on proprietary protocols and data formats. As the variety of sensors increases, and as multiple generations of sensors are deployed on the same networks, mediation capabilities allow data to be normalized into a more standard set of elements, so that readings from similar types of sensors that produce different data formats can be easily aligned and compared.

Orchestration is also important in allowing events across sensors or devices to be intelligently correlated. Since the vast majority of data created by the Internet of Things will be uninteresting, organizations need a way to recognize interesting things when they happen, even when those things only become interesting once several related things occur. For example, a slight temperature rise in one sensor might not be a huge issue, until you consider that a coolant pump is also experiencing a belt slippage. Orchestration allows seemingly disconnected events to be connected together into a more complex event. It also provides a mechanism to generate an appropriate, and sometimes complex, response.
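
A very small sketch of that kind of correlation, using the coolant pump example above, might look like this; the event types, correlation window, and response are invented for illustration.

```python
import time

# Recent events, kept for a short correlation window.
WINDOW_SECONDS = 300
recent_events = []  # list of (timestamp, device, event_type)

def correlate(event):
    """Raise a complex event when a temperature rise and a belt slippage
    are both seen within the correlation window."""
    now = time.time()
    recent_events.append((now, event["device"], event["type"]))
    # Drop anything outside the window.
    recent_events[:] = [e for e in recent_events if now - e[0] <= WINDOW_SECONDS]

    types = {etype for _, _, etype in recent_events}
    if {"temperature_rise", "belt_slippage"} <= types:
        return {"type": "coolant_pump_degradation",
                "action": "schedule_inspection"}
    return None

print(correlate({"device": "sensor-7", "type": "temperature_rise"}))
print(correlate({"device": "pump-42", "type": "belt_slippage"}))
```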

Data Management – As data flows from connected devices, the data must be managed in a way that allows it to be easily understood and analyzed by business users. Many connected devices provide data in incremental updates, like progressive meter reads. This type of data is best managed in a time series, in what is often called a historian database, so that its change and deviation over time can be easily understood. For example, energy load profiles, temperature traces, and other sensor readings are best understood when analyzed over a period of time. Time series database techniques allow very high volumes of writes to occur in the database layer without disruption, enabling the database to keep up with the types of volumes inherent in Internet of Things scenarios. Time series query capabilities allow businesses to understand trends and outliers very quickly within a data stream, without having to write complex queries to manipulate stubborn relational structures.
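
As a rough sketch of time-series-style analysis, the following uses pandas to turn a few raw incremental meter readings into an aggregated load profile and reading-to-reading deltas. A real historian database would do this at far higher volumes and at the database layer; the readings here are invented.

```python
import pandas as pd

# A few illustrative meter readings (timestamp, cumulative kWh) from one device.
readings = [
    ("2014-03-01 00:00", 1200.0),
    ("2014-03-01 00:15", 1203.5),
    ("2014-03-01 00:30", 1208.1),
    ("2014-03-01 00:45", 1214.9),
]
series = pd.DataFrame(readings, columns=["ts", "kwh"])
series["ts"] = pd.to_datetime(series["ts"])
series = series.set_index("ts")

# Time-series style queries: resample to an hourly load profile and
# compute the deviation between consecutive readings.
hourly_profile = series["kwh"].resample("60min").mean()
deltas = series["kwh"].diff()
print(hourly_profile)
print(deltas)
```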

Another important dimension to Internet of Things data is geospatial metadata. Since many connected things can be mobile, tracking location is often important alongside the time dimension. Geospatial analytics provide great value within many Internet of Things scenarios, including connected vehicles and equipment tracking use cases. Including native geospatial capabilities at the data layer, and allowing for easy four-dimensional analysis combining time series and geospatial data, opens more possibilities for extracting value from the Internet of Things.

With the volume of data in many Internet of Things settings, the cost of retaining everything can quickly get out of control, so it needs to be governed by retention policies. The utility of data degrades fairly quickly in most scenarios, so immediate access becomes less important as it ages. Data retention policies allow the business to determine how long to retain information in the database layer. Often this is based on a specified amount of time, but it can also be based on a specific number of sensor readings or other factors. Complex policies could also define conditions under which default policies should be overridden, in cases where something interesting was sensed, for example.
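
A retention policy can be as simple as a scheduled pruning job. The sketch below runs against an assumed SQLite table of readings (device_id, ts, value, interesting) and prunes first by age and then by count, while honoring an "interesting" flag that overrides the default policy; the schema and thresholds are illustrative.

```python
import sqlite3
import time

RETENTION_SECONDS = 90 * 24 * 3600   # e.g. keep 90 days of readings
MAX_ROWS_PER_DEVICE = 100_000        # or cap by number of readings

def apply_retention(conn: sqlite3.Connection, device_id: str) -> None:
    """Prune old readings by age, then by count, unless they are flagged
    as interesting (which overrides the default policy).

    Assumes a table: readings(device_id TEXT, ts REAL, value REAL, interesting INTEGER).
    """
    cutoff = time.time() - RETENTION_SECONDS
    conn.execute(
        "DELETE FROM readings WHERE device_id = ? AND ts < ? AND interesting = 0",
        (device_id, cutoff),
    )
    conn.execute(
        """DELETE FROM readings WHERE device_id = ? AND interesting = 0
           AND ts NOT IN (SELECT ts FROM readings WHERE device_id = ?
                          ORDER BY ts DESC LIMIT ?)""",
        (device_id, device_id, MAX_ROWS_PER_DEVICE),
    )
    conn.commit()
```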

Asset Management – When the lifecycle of connected things needs to be managed, asset management becomes a key capability. This is particularly important when dealing with high value assets, or instances where downtime equates to substantial lost revenue opportunity. Asset management solutions provide a single point of control over all types of assets — production, infrastructure, facilities, transportation and communications — enabling the tracking of individual assets, along with their deployment, location, service history, and resource and parts supply chain.

Asset management manages details on failure conditions and specific prescribed service instructions related to those conditions. It helps manage both planned and unplanned work activities, from initial request through completion and recording of actuals. It also establishes service level agreements, and enables proactive monitoring of service level delivery, and implementation of escalation procedures. By connecting real-time awareness with asset management, maintenance requirements can be more effectively predicted, repair cycles optimized, and assets more effectively tracked and managed.

Dashboards & Visualization – When dealing with large volumes of data, one of the best ways of understanding what is happening is to use visualizations and dashboards. These technologies allow information to be easily summarized into live graphical views that quickly show where problems and outliers may be hidden. Users can drill into potential problem areas and get more detail to be able to diagnose problems and propose resolution. Dashboards provide context to information and provide users with specific controls to address common issues. They provide a way to visually alert users to important data elements in real time, and then act on that information directly.

Integration – Integration into on-premise or Cloud-based back-office systems of record is critical for many Internet of Things scenarios. Back-office systems provide customer, inventory, sales, and supply chain data, and also provide access to key functions like MRP, purchasing, customer support, and sales automation. By integrating with these key systems, insights and events gained from the Internet of Things can be converted into actions. For example, parts ordering could be automated when a condition that predicts a pending failure is detected, or a partner could be alerted to an opportunity to replenish an accessory when a low supply is detected.
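
As an illustration of turning an insight into an action, the sketch below posts a parts order to a hypothetical back-office REST endpoint when a failure prediction crosses a confidence threshold; the endpoint, payload fields, and threshold are invented.

```python
import requests

ERP_ORDER_URL = "https://erp.example.com/api/purchase-orders"  # hypothetical endpoint

def on_failure_prediction(prediction, token):
    """Convert an Internet of Things insight into a back-office action:
    order a replacement part when a failure is predicted."""
    if prediction["confidence"] < 0.8:
        return None
    order = {
        "partNumber": prediction["likely_failed_part"],
        "quantity": 1,
        "neededBy": prediction["predicted_failure_date"],
        "reason": f"Predicted failure on asset {prediction['asset_id']}",
    }
    resp = requests.post(ERP_ORDER_URL, json=order,
                         headers={"Authorization": f"Bearer {token}"})
    resp.raise_for_status()
    return resp.json()["orderId"]
```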

Client SDK – The processing power of connected devices is continuously increasing. In addition to providing a connection to the Internet, many of these devices are capable of additional processing functions. For example, in some cases where connections are sporadic, on-device caching is desirable. In other cases, it even makes sense to run some filtering or analytics directly on higher-powered devices to pre-filter or manipulate data.

Providing a client SDK that enables these functions helps organizations who build these devices to innovate more quickly. It also enables third-party developers to drive their own innovations into these products. At a minimum, the ability to manage the client side of a publish-subscribe interaction is required. By enabling these capabilities in the SDK, chip and device manufacturers can optimize their opportunity and increase the utility of their offerings.
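
One of the simplest client-side capabilities such an SDK can provide is caching readings while offline and flushing them once the broker is reachable again. Here is a minimal sketch using the Eclipse Paho Python client (1.x-style API); the broker address and topic are illustrative.

```python
import json
import time
from collections import deque

import paho.mqtt.client as mqtt  # 1.x-style API

class BufferedPublisher:
    """Minimal client-side caching: queue readings while the connection
    is down and flush them once the broker is reachable again."""
    def __init__(self, broker, topic, maxlen=10_000):
        self.topic = topic
        self.buffer = deque(maxlen=maxlen)   # drop oldest once full
        self.client = mqtt.Client()
        self.client.on_connect = self._on_connect
        self.client.connect_async(broker, 1883)
        self.client.loop_start()

    def _on_connect(self, client, userdata, flags, rc):
        while self.buffer:                   # flush cached readings
            client.publish(self.topic, self.buffer.popleft(), qos=1)

    def send(self, reading: dict):
        payload = json.dumps(reading)
        if self.client.is_connected():
            self.client.publish(self.topic, payload, qos=1)
        else:
            self.buffer.append(payload)      # cache while offline

pub = BufferedPublisher("broker.example.com", "devices/tracker-1/events")
pub.send({"steps": 4200, "ts": time.time()})
```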

Want more detail? Check us out at ThingMonk December 2-3 in Shoreditch: THE conference to go to for Internet of Things!

Systems of interaction

This week IBM coined a new term, “systems of interaction,” to describe the integration of systems of engagement with systems of record. The idea is that you have systems focused on engaging with customers (systems of engagement) and other systems focused on transactions (systems of record), and the confluence of these drives interactions that can ultimately result in transactions for your business. That confluence introduces new requirements for integration, security, reliability, and manageability across these domains. Find out more here: http://t.co/RfPAeohLKo.

Enterprise Service Cloud in China

I am in China this week, meeting with customers and partners, and presenting on “Next Generation SOA”. Although China embraced the original wave of SOA, companies here are very quickly extending SOA into new areas. For example, the concept of Internet of Things is extremely advanced here, with most manufacturers instrumenting their equipment to enable better, more proactive response to maintenance issues. SOA is the underlying fabric of this.

While in Shanghai, I had the opportunity to meet with one of our China SI partners, Cap Gemini. They have coined the concept of an “Enterprise Cloud Bus,” which is a layer outside your ESB that exposes services and apps to the outside world. I like this thought, though I think “Enterprise Service Cloud” may be a better name. The idea is that there are sets of services and APIs that organizations want to expose internally and externally. These services may be traditional XML/SOAP services, or they may be JSON/REST services. They may interface to a combination of internal and external applications (cloud and on-premise). These same services can be used across multiple channels: internal applications, Web, partner applications, mobile, open APIs, even devices. Hence, the same service may have multiple interfaces and policies, and may even be presented as an open API rather than a traditional service.

We implement this pattern all the time, though often this layer is not called out separately from the ESB. However, the security and policy management requirements it raises, along with the need to support unpredictable load spikes due to the various ways the services can be accessed, make it prudent to look at this as a separate layer, even though technically many of these capabilities can be handled directly in the ESB. In fact, this concept originated with ESBs, which largely started out implementing service facade patterns on top of proprietary applications. The primary difference (and why many ESBs fail to enable this properly) is that the consumption model has become much more complex while the protocol has become markedly more open. As service/API consumers become more varied and plentiful, and more unknowns creep in around things like who will access, how, and from where, planning for this layer becomes imperative, and simply assuming your ESB provider can do it well is not a safe bet (even if they say they can).
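
As a rough sketch of what this exposure layer does at its simplest, the following Flask facade presents an internal application as a JSON/REST service with a placeholder policy check. The backend URL, credential check, and response shape are all illustrative; in practice a gateway would enforce most of the policy and handle the load spikes discussed above.

```python
from flask import Flask, jsonify, request
import requests

app = Flask(__name__)
INTERNAL_CUSTOMER_SERVICE = "http://internal.example.com/customers"  # hypothetical backend

@app.route("/api/v1/customers/<customer_id>", methods=["GET"])
def get_customer(customer_id):
    """Facade: present an internal service as an open JSON/REST API.
    Policy checks (API keys, rate limits) would sit here or in a gateway."""
    if request.headers.get("X-Api-Key") != "expected-key":   # placeholder policy check
        return jsonify({"error": "unauthorized"}), 401
    backend = requests.get(f"{INTERNAL_CUSTOMER_SERVICE}/{customer_id}")
    data = backend.json()
    # Shape the external representation independently of the internal one.
    return jsonify({"id": data["id"], "name": data["name"], "tier": data.get("tier")})

if __name__ == "__main__":
    app.run(port=8080)
```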

Integrating SAP with Salesforce.com

The race to SaaS has been impressive as SAP, Oracle, and Microsoft have scooped up a variety of SaaS vendors over the past 12 months. Meanwhile the SaaS vendors presumably deemed too expensive to buy, like Salesforce.com and Workday, have continued to thrive, beginning an acquisition wave of their own.

SaaS has clearly gotten the attention of the big application vendors like Oracle, which has bought SaaS plays Taleo and RightNow in recent months. In fact, Larry Ellison even (surprisingly) mentioned wins against Workday on his last earnings call. And in a typical display of Oracle Math, Mark Hurd claimed that Oracle is the second largest seller of online applications. Whether or not you drink the Kool-Aid, it is clear that Oracle, SAP, and Microsoft are moving to SaaS, and moving there fast.

However, Salesforce.com continues to dominate the SaaS space, at least for sales force automation, eating a big chunk of revenue out of Oracle’s former stronghold. I am continuing to see more and more companies of all sizes choosing Salesforce.com, many of them SAP and Oracle stalwarts. The good news for IBM is that whenever organizations choose one of these applications, they need a way to easily integrate it back into their on-premise systems. There is nobody better at this than IBM.

A great example of this is Philips Healthcare, which integrated SAP and Salesforce.com in less than two weeks using IBM technology. Stefan Katz, Director of Application Architecture at Philips, will be discussing this in an upcoming webinar on June 22 at 10 AM PT. I encourage you to register and check it out: Register Here.

Are open APIs overtaking open source?

There has been an interesting ongoing discussion about how open APIs are becoming the new open source. Jay Lyman from 451 wrote an article published in TechNewsWorld in which he describes this phenomenon:

“There was a time 10 years ago or so when open source was “good enough” — that is, it served as a viable, often lower-cost, lower-hassle alternative to the proprietary software of the day. Today, all software is generally more open, and I believe we’ve reached a point when non-open source software is often “open enough.” The prime examples are cloud APIs from Amazon (Nasdaq: AMZN), which are neither open source nor open standards, necessarily, but are readily and widely available and tend to serve as the de facto standards of the day, including for open source plays on top, such as Eucalyptus. The fact is, Amazon Web Services APIs are open enough to facilitate the creation of integrations, connections and services despite the fact the underlying code is not open source.”

Although this is a compelling idea, I don’t fully agree that open source is being overtaken by open APIs. I think instead what is happening is that three converging technology trends are producing a new approach to deploying applications:

  1. SOA – at the foundation of all of this is SOA. Creating discrete, loosely coupled functions that are callable as a service is a prerequisite to any of this working. The original promise of SOA included the concept of composite applications, which would dynamically string together best of breed functionality from across the IT landscape. As SOA has evolved to lighter, more open approaches like JSON and REST, the feasibility of this vision has improved dramatically.
  2. Cloud – if SOA was the foundation, the tipping point for change came in the form of Cloud. SaaS applications have absolutely disrupted the packaged application marketplace. Applications like Salesforce.com, Workday, and SugarCRM have compelled organizations to rethink their application strategy and extend their application base beyond their four walls. Since many of these SaaS applications were service-oriented and open-API-oriented from day one, they really started the revolution. The SaaS-inspired shift to a best of breed approach for applications has opened up new possibilities for a variety of companies to offer their niche capabilities as open services.
  3. Social – with SOA as the foundation and Cloud as the tipping point, Social technology has acted as a catalyst. One of the primary use cases that has driven organizations to adopt this approach faster has been the desire to take advantage of social technology. Open API connections to things like Facebook and Twitter provide many new marketing and customer outreach opportunities. Salesforce.com has essentially rebranded what they offer as “Social Enterprise in the Cloud” to reinforce this connection.

So the net result of all of this has been that organizations have increasingly sought to build new applications on top of open APIs rather than building something from scratch using traditional on-premise technology installs. This, in turn, has opened an ecosystem of new opportunities where companies new and old can create new revenue streams or reach new customers by publishing niche services that can be hooked into these new applications. For example, Pitney Bowes, a company primarily known for postal metering, now publishes services for shipping and tracking that are becoming de facto standards for many of these new style of applications.

So this is an exciting new trend, but I don’t see it completely displacing on-premise systems anytime soon, open source or otherwise. There is a class of applications for which this approach is well suited, and that class has expanded rapidly since the advent of Cloud, but there are still things that will continue to run on-site in most organizations.

What I am already seeing is that many of these on-premise systems are beginning to be extended to leverage external open APIs, often within the context of overlay business processes. You could surmise that this is the beginning of a sea change that is likely to continue.
