
Introducing BlueMix


Today, IBM unveiled a new platform for building and operating Cloud-native and dynamic hybrid cloud applications. I’m very excited about this announcement, not only because much of my portfolio has been mixed into it, but also because it is the vehicle by which I believe IBM will transform its business.

At its core, BlueMix is a platform-as-a-service (PaaS) offering based on Cloud Foundry. But it is much more than that. We’ve invested a huge amount of code back into the core of Cloud Foundry, and we’re also extending what is possible in CF with our breadth of middleware capabilities. For example, we’ve extended the CF gateway natively with some of our DataPower gateway capabilities to improve security control and traffic optimization. We’ve also extended CF’s management layer with operational intelligence and advanced performance management and analytics. And these are just a couple of examples.

From a DevOps perspective, we’ve hardened and optimized BlueMix on SoftLayer infrastructure to provide strong performance, seamless operations, high availability, and elastic autoscaling. We’ve also created elastic Java (based on WebSphere Liberty) and JavaScript (based on Node.js) runtimes for running applications.

But the most exciting part of BlueMix for me is the new development paradigm. We’ve built a new UI for easily deploying your choice of runtime and binding any of a catalog of services to it in seconds. Scale and size of deployment are handled by the infrastructure and easily configured through the UI. A cloud-based IDE is built in, allowing live code editing with immediate feedback and instant DevOps cycles.
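Because BlueMix is built on Cloud Foundry, I’d expect the standard cf command-line flow to work alongside the UI. Here is a minimal sketch of that flow; the app, service, and plan names are hypothetical placeholders:

```
# Push an application (runtime is chosen via buildpack detection
# or a manifest.yml)
cf push my-app -m 512M

# Provision a service instance from the catalog and bind it to the app
cf create-service my-cache-service standard my-cache
cf bind-service my-app my-cache

# Restage so the app picks up the new binding
cf restage my-app
```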

The services catalog is already very rich, with a variety of services that assist in building mobile applications (e.g. mobile push notifications), building service resiliency (e.g. caching based on Extreme Scale, elastic MQ based on WebSphere MQ), or extending application capabilities (e.g. Watson Discovery Advisor). The catalog also includes third-party services, both open source offerings and commercial ones from partners like Twilio and Pitney Bowes. I expect it to keep expanding on a weekly basis.
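As in standard Cloud Foundry, a bound service’s credentials should show up in the application’s VCAP_SERVICES environment variable. Here is a minimal Node.js sketch of reading them; the service label is a hypothetical example:

```javascript
// Minimal sketch: reading credentials for a bound service in a
// Node.js runtime. The label "my-cache-service" is hypothetical.
const vcap = JSON.parse(process.env.VCAP_SERVICES || '{}');

// Each key is a service label; each value is an array of bound
// instances carrying their credentials.
const instances = vcap['my-cache-service'] || [];
if (instances.length > 0) {
  const creds = instances[0].credentials;
  console.log('host:', creds.host, 'port:', creds.port);
} else {
  console.log('No my-cache-service binding found');
}
```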

What all this adds up to is the most productive development experience I have ever seen from IBM. As organizations shift to cloud-first and hybrid cloud systems development, I believe BlueMix will be a significant differentiator for them. With BlueMix, IBM is demonstrating a true understanding of the change that Cloud represents for middleware, not just porting traditional products to the Cloud or redirecting attention to SaaS properties. Now that it is in open beta, we’ll see how customers respond.


Systems of interaction

This week, IBM coined a new term, “systems of interaction,” to describe the integration across systems of engagement and systems of record. The idea is that you have systems focused on engaging with customers (systems of engagement) and other systems focused on transactions (systems of record), and the confluence of the two drives interactions that can ultimately result in transactions for your business. That introduces new requirements for integration, security, reliability, and manageability across these domains. Find out more here: http://t.co/RfPAeohLKo.


Enterprise Service Cloud in China

I am in China this week, meeting with customers and partners and presenting on “Next Generation SOA”. China embraced the original wave of SOA, and companies here are now very quickly extending it into new areas. For example, the Internet of Things is extremely advanced here, with most manufacturers instrumenting their equipment to enable better, more proactive responses to maintenance issues. SOA is the underlying fabric of all of this.

While in Shanghai, I had the opportunity to meet with one of our China SI partners, Capgemini. They have coined the concept of an “Enterprise Cloud Bus,” a layer outside your ESB that exposes services and apps to the outside world. I like this thought, though I think “Enterprise Service Cloud” may be a better name. The idea is that there are sets of services and APIs that organizations want to expose internally and externally. These services may be traditional XML/SOAP services, or they may be JSON/REST services. They may interface with a combination of internal and external applications (cloud and on-premise). The same services can be used across multiple channels: internal applications, Web, partner applications, mobile, open APIs, even devices. Hence, the same service may have multiple interfaces and policies, and may even be presented as an open API rather than a traditional service.
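To make the “one service, many interfaces” idea concrete, here is a minimal Node.js/Express sketch; the endpoints and payloads are hypothetical, not any particular product’s API:

```javascript
// Minimal sketch: one internal operation exposed through two
// interfaces, JSON/REST and XML for SOAP-oriented consumers.
const express = require('express');
const app = express();

// The internal service implementation, shared by every interface.
function getOrderStatus(orderId) {
  return { orderId, status: 'SHIPPED' };
}

// JSON/REST interface, e.g. for mobile and Web consumers.
app.get('/api/orders/:id/status', (req, res) => {
  res.json(getOrderStatus(req.params.id));
});

// XML interface for legacy/SOAP-oriented consumers.
app.get('/services/orders/:id/status', (req, res) => {
  const o = getOrderStatus(req.params.id);
  res.type('application/xml')
    .send(`<orderStatus id="${o.orderId}">${o.status}</orderStatus>`);
});

app.listen(3000);
```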

We implement this pattern all the time, though often this layer is not called out separately from the ESB. However, the security and policy management requirements it raises, along with the need to support unpredictable load spikes from the many ways the services can be accessed, make it prudent to treat this as a separate layer, even though technically many of these capabilities can be handled directly in the ESB. In fact, this concept originated with ESBs, which largely started out implementing service facade patterns on top of proprietary applications. The primary difference (and the reason many ESBs fail to enable this properly) is that the consumption model has become much more complex while the protocols have become markedly more open. As service/API consumers become more varied and plentiful, and more unknowns creep in around who will access the services, how, and from where, planning for this layer becomes imperative. Simply assuming your ESB provider can do it well is not a safe bet (even if they say they can).
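As a sketch of the kind of policy enforcement that belongs in this layer, here is a hand-rolled per-consumer rate limiter written as Express middleware. The limits and header name are hypothetical; a real gateway would also handle authentication, per-plan quotas, and traffic shaping:

```javascript
// Minimal sketch: per-consumer rate limiting at the facade layer.
const express = require('express');
const app = express();

const WINDOW_MS = 60 * 1000;   // 1-minute window
const MAX_REQUESTS = 100;      // per consumer per window
const counters = new Map();    // consumerId -> { count, windowStart }

app.use((req, res, next) => {
  const consumer = req.get('X-Consumer-Id') || 'anonymous';
  const now = Date.now();
  let entry = counters.get(consumer);
  // Start a fresh window if none exists or the old one has expired.
  if (!entry || now - entry.windowStart > WINDOW_MS) {
    entry = { count: 0, windowStart: now };
    counters.set(consumer, entry);
  }
  if (++entry.count > MAX_REQUESTS) {
    return res.status(429).send('Rate limit exceeded');
  }
  next();
});

app.get('/api/ping', (req, res) => res.send('pong'));
app.listen(3000);
```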



Integrating SAP with Salesforce.com

The race to SaaS has been impressive, as SAP, Oracle, and Microsoft have scooped up a variety of SaaS vendors over the past 12 months. Meanwhile, the SaaS vendors presumably deemed too expensive to buy, like Salesforce.com and Workday, have continued to thrive, beginning an acquisition wave of their own.

SaaS has clearly gotten the attention of the big application vendors like Oracle, which has bought SaaS plays Taleo and RightNow in recent months. In fact, Larry Ellison even (surprisingly) mentioned wins against Workday on his last earnings call. And in a typical display of Oracle Math, Mark Hurd claimed that Oracle is the second largest seller of online applications. Whether or not you drink the Kool-Aid, it is clear that Oracle, SAP, and Microsoft are moving to SaaS, and moving there fast.

However, Salesforce.com continues to dominate the SaaS space, at least for sales force automation, eating a big chunk of revenue out of Oracle’s former stronghold. I continue to see more and more companies of all sizes choosing Salesforce.com, many of them SAP and Oracle stalwarts. The good news for IBM is that whenever organizations choose one of these applications, they need a way to easily integrate it back into their on-premise systems. There is nobody better at this than IBM.
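To give a flavor of one half of such an integration, here is a minimal Node.js sketch that pulls account records from the Salesforce REST API so they could be synced into an on-premise system. The instance URL and token handling are hypothetical placeholders; a real integration would use a proper OAuth flow, paging, and error handling:

```javascript
// Minimal sketch: querying Salesforce Account records for an
// on-premise sync. Requires Node 18+ for the global fetch().
const INSTANCE = 'https://yourInstance.salesforce.com'; // placeholder
const TOKEN = process.env.SF_ACCESS_TOKEN; // obtained via OAuth

async function fetchAccounts() {
  const soql = encodeURIComponent('SELECT Id, Name FROM Account');
  const res = await fetch(
    `${INSTANCE}/services/data/v20.0/query?q=${soql}`,
    { headers: { Authorization: `Bearer ${TOKEN}` } }
  );
  if (!res.ok) throw new Error(`Salesforce query failed: ${res.status}`);
  const body = await res.json();
  return body.records; // hand these to the on-premise sync step
}

fetchAccounts().then(accts => console.log(accts.length, 'accounts'));
```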

A great example of this is Philips Healthcare, which integrated SAP and Salesforce.com in less than two weeks using IBM technology. Stefan Katz, Director of Application Architecture at Philips, will discuss this in an upcoming webinar on June 22 at 10AM PT. I encourage you to register and check it out: Register Here.


The Hybrid Enterprise

Interesting article on how the combination of SaaS and on-premise enterprise applications is emerging as the dominant model for application infrastructures, and is likely to remain so for some time (the author calls this the Hybrid Enterprise). I agree with this assessment. I also think that much of the SaaS integration requirement is not yet fully understood by many organizations. Integrating SaaS applications well is fundamental to obtaining the full value of the investment; too often, organizations make integration an afterthought.


A million tiny pieces…

The trend toward cloud computing and massively parallel data processing has definitely hit the mainstream media. I can’t believe how much coverage the concept is getting on a weekly basis, despite the fact that few organizations have made much of an investment in this technology outside of SMP-based parallel databases and ETL.

The impact on basic commercial information management software is potentially substantial. The traditional approach to managing information has been to pull it all together in one place and control its access through a DBMS or content repository, typically running on a huge symmetric multiprocessor (SMP) box. With the new approach, using infrastructure like Hadoop, that burden can be spread across many smaller servers, which can be distributed across a broader geographic area (to match where the data is likely coming from). These servers work in tandem to process much higher volumes of information, though in truth they act more like a distributed file system than like a DBMS. There is some belief that you won’t need or want those centralized DBMSs once you have this technology. However, I believe a hybrid model will reign for at least the foreseeable future, since businesses need the more mature controls afforded by DBMS infrastructure. Plus, DBMSs are naturally evolving further toward cross-node parallelism (see IBM pureScale), which provides many of the same scalability benefits.
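To illustrate the divide-and-conquer model behind infrastructure like Hadoop, here is a minimal word-count sketch in Node.js that simulates the map and reduce phases in-process; a real MapReduce job would run these phases on many machines over a distributed file system:

```javascript
// Minimal sketch of MapReduce-style divide-and-conquer, in-process.
function map(chunk) {
  // Each mapper emits (word, 1) pairs for its chunk of the data.
  return chunk.toLowerCase().split(/\W+/).filter(Boolean)
              .map(word => [word, 1]);
}

function reduce(pairs) {
  // The reducer sums the counts for each word across all mappers.
  const totals = {};
  for (const [word, n] of pairs) {
    totals[word] = (totals[word] || 0) + n;
  }
  return totals;
}

// Each "partition" stands in for a chunk stored on a different node.
const partitions = [
  'the data is chopped up',
  'and the data is spread across many machines',
];
const counts = reduce(partitions.flatMap(map));
console.log(counts); // { the: 2, data: 2, is: 2, ... }
```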

And not only are DBMSs advancing their internal architectures, they are also beginning to provide seamless interoperability with these distributed file systems. An example of this trend can be seen in Quest’s recent announcement with Cloudera, where they are building adapters that allow existing Oracle databases to be extended with Hadoop. This follows on the heels of IBM’s similar announcement about Hadoop support. I find it interesting that Oracle isn’t the one announcing this… they’ve been conspicuously silent on this topic (though they did publish this blog post on how to link the two using scotch tape and baling wire).

Interestingly, the impact is not just on software. This NY Times article talks about the effect this same trend is having on hardware:

The focus instead is on taking chunks of information, chopping them up and spreading the data across thousands of computers and storage devices. It’s a divide-and-conquer approach to making the avalanche of data produced online manageable.

The article discusses how larger arrays of smaller processors are showing up in hardware for computing tasks that are not complex, just high in volume. The idea is that smaller, less power-hungry chips (like those found in cell phones) can process simple things like Web requests just as effectively as more powerful chips, and they can be packed more densely into hardware while consuming much less power and generating much less heat. Several interesting startups have headed down this path with promising offerings.

So keep an eye out for this trend, and make sure your vendors have a strategy for this, because it is likely to change the way you think about software and hardware in the near future.
