I’ve seen verbiage on this echoing around the Net while the various Cloudy festivities go on down in the Bay Area. You could spend a lot of time partitioning and taxonomizing the interop problem, but I’d rather think of it from the business point of view.

The question that seems more important than all the rest is “Can I afford to switch vendors?” Let’s consider some examples.

  • When printers wear out, you can buy new printers from whoever with little concern for switching cost.

  • If you’re unhappy with your current servers, you can replace them with models from lots of vendors (Sun, Dell, HP, IBM, others) without worrying too much about compatibility (well, you may have some racking and cabling pain); the issues are price, performance, and support.

  • If you’re grouchy about your OS, you can move between *n*x flavors like Debian, SUSE, and Solaris pretty freely in most (granted, not all) cases, with maybe some deployment and sysadmin pain.

  • If you’re unhappy with your desktop environment, well, too bad: you’re stuck. Your users are too deeply bought into some combination of Outlook calendaring and Excel macros and SharePoint collab. The price of rebuilding the whole environment is simply too high for most businesses to consider.

  • If you’re unhappy with your Oracle licensing charges, you probably have to suck it up and deal with it. SQL is a good technology but a lousy standard, offering near-zero interoperability (a sketch after this list makes that concrete); the cost of re-tooling your apps so they’ll run on someone else’s database is probably unthinkable. Like they say, you date your systems vendor but you marry Larry Ellison.
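
To make the “lousy standard” point concrete, here’s a sketch of how even a trivial query (fetch the first ten rows of a sorted result) diverges across engines. The table and column names are made up; the dialect differences are real. Shown as Python strings, since that’s how an application typically carries them:

    # The same "first ten rows of a sorted result" query in three dialects.
    # Table and column names are hypothetical.
    PAGINATED_QUERIES = {
        # MySQL and PostgreSQL: LIMIT, an extension rather than core SQL
        "mysql": "SELECT id, name FROM customers ORDER BY name LIMIT 10",
        # Oracle (pre-12c): no LIMIT; wrap the query and filter on ROWNUM
        "oracle": ("SELECT id, name FROM "
                   "(SELECT id, name FROM customers ORDER BY name) "
                   "WHERE ROWNUM <= 10"),
        # SQL Server: TOP in the select list
        "sqlserver": "SELECT TOP 10 id, name FROM customers ORDER BY name",
    }

Multiply that by every query, report, and stored procedure in a large application, and the retooling cost starts to look unthinkable indeed.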

It’s like this: CIOs looking at technologies don’t want to marry their cloud service provider. That’s all.



Contributions


From: Jan Karlsbjerg (Jan 22 2009, at 18:54)

Interesting topic (of course I say that; I used to do research in this field :-)) and an interesting selection of examples.

- Printers: Pure commodity for all but the most specialized uses

- Server hardware, server OS: Changing these only hurts in the server room and the IT department; it doesn't affect the majority of computer-using staff

- Desktop software: Yes, it hurts a lot of people, but not as much as you'd think; it can be done (the actual switching costs in time and money aren't really that big). But you'll never change any desktop software if you first put it to an all-staff vote. :-)

- DB software: Can also be done; it hurts fewer individuals, but hurts each of them more (deliberate recoding projects are required). But there are huge potential license savings too

By the way, the CIOs I interviewed in my research about technology standards didn't use terms like "marry" to describe their past or upcoming choices. Instead, some violent sexual metaphors were used. (I'm not kidding.)

It seems to me that your post mostly discusses vendor lock-in and switching costs, more than actual interoperation between different systems. And I'm not sure that cloud services are all that different from non-cloud products and services when it comes to vendor lock-in and switching costs.

Cloud services may have little or no standardization, and obviously your data now lives in their postal code, etc. But organizations should still be able to keep some control of the relationship with the service provider... They can still threaten to take their business elsewhere:

1. They should insist on having the ability to export all their data out of the cloud service (a sketch of such an export follows below).

2. They should document their business processes in such a form that they can be migrated to a competing service.

Hopefully there are competing service providers (and old-fashioned software companies) that are eager to help the company migrate both data and processes to their own product/service. Exactly like with other IT projects.
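
As a hedged illustration of point 1, here is what a periodic "can we actually get our data out?" drill might look like against Amazon S3, using the boto3 client for concreteness. The bucket name and destination path are hypothetical:

    # Minimal data-export drill: copy every object in a bucket to local
    # storage. A sketch only; bucket name and paths are made up.
    import os
    import boto3

    def export_bucket(bucket_name, dest_dir):
        s3 = boto3.client("s3")
        paginator = s3.get_paginator("list_objects_v2")
        for page in paginator.paginate(Bucket=bucket_name):
            for obj in page.get("Contents", []):
                key = obj["Key"]
                if key.endswith("/"):  # skip folder-placeholder objects
                    continue
                local_path = os.path.join(dest_dir, key)
                os.makedirs(os.path.dirname(local_path), exist_ok=True)
                s3.download_file(bucket_name, key, local_path)

    export_bucket("example-corp-data", "/backup/s3-export")

The point is less the code than the habit: if you never rehearse the export, you don't actually know you can switch.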


From: Walter Underwood (Jan 22 2009, at 20:37)

Retooling can be cheaper than renewing an Oracle license and the threat is excellent leverage. It has been done.

What are you more afraid of, your own code or Oracle? Hint: There is only one right answer.

I don't see a big issue with changing cloud providers, since you are still running your own software regardless of the cloud. Yes, you have to add indirection in your management, but you already had to do that to move from your own data centers to the cloud, so going from 2 to N is not such a big deal.
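
That "indirection" is essentially an adapter layer. A hedged sketch of what the seam might look like; every name here is hypothetical, not any real library's API:

    # The application depends on one narrow interface; each provider
    # (or your own data center) is an adapter behind it.
    from abc import ABC, abstractmethod

    class BlobStore(ABC):
        """The seam the application codes against."""
        @abstractmethod
        def put(self, key: str, data: bytes) -> None: ...
        @abstractmethod
        def get(self, key: str) -> bytes: ...

    class InMemoryStore(BlobStore):
        """Stand-in adapter; a real one would wrap S3, a rival cloud,
        or your own data center's storage."""
        def __init__(self):
            self._blobs = {}
        def put(self, key, data):
            self._blobs[key] = data
        def get(self, key):
            return self._blobs[key]

    def archive_report(store: BlobStore, report: bytes) -> None:
        # The caller never names a vendor; switching providers means
        # swapping the adapter, so going from 2 back-ends to N is cheap.
        store.put("reports/latest", report)

    archive_report(InMemoryStore(), b"quarterly numbers")

Going from 2 to N is then a matter of writing adapters; the calling code never changes.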

Cloud provider SLAs, now that is an interesting subject. Things like network latency between the nodes are off the table. Oops.


From: Stu (Jan 22 2009, at 23:52)

Sure, but it's not that simple.

- At what level should it be easy to switch providers? Storage, x86 VM, application, db schema?

- Since cloud providers might provide one or more of these things, but in slightly different ways, wouldn't you have to collectively _understand_ the differences?

- Aren't interoperability and integration more important? After all, CIOs aren't likely to use the cloud providers much if they don't integrate with their existing data center.

Portability is important, but it is not the most important thing in IT. I point to Oracle's, Microsoft's, and Apple's continued successes as examples (.NET is still huge; new Oracle databases get deployed all the time; the iPhone is pretty closed, but seems to be doing quite well).


From: Steve Loughran (Jan 23 2009, at 01:35)

Historically (well, since the PC took over from the mainframe and the minicomputer) you've had commodity hardware with non-commodity software. Either you were stuck in the OS (DOS, Windows), or somebody owned your data (MS, Oracle, SAP). Whoever owns your data really owns you, especially now that virtualisation means you can keep a copy of Windows around for emergencies (as I type this on my Ubuntu laptop, an XP VM in a window runs its weekly antivirus scan).

The cloud is very similar, except now the hardware and software are mixed. But whoever owns your data owns you.


From: Bill de hÓra (Jan 23 2009, at 16:05)

It concerns me that the people going on about cloud inter-operation do not understand what the real lock-in axis is (the volume of data) but instead are focusing on incidentals like application protocols and formats. This is something I think both Doug Cutting and I have commented on here in the past.

Explain how to shunt Terabytes (or Petabytes) of data and generated metadata from one storage back-end to another; then let's talk about inter-operation through the front.
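
Back-of-the-envelope numbers make the point vivid. The line rates and the efficiency derate below are illustrative assumptions, not anyone's quoted figures:

    # How long does bulk data take to move over a network link?
    # Rates and the 80% efficiency derate are assumptions.
    def transfer_days(total_bytes, bits_per_second, efficiency=0.8):
        seconds = (total_bytes * 8) / (bits_per_second * efficiency)
        return seconds / 86_400  # seconds per day

    PB = 10 ** 15  # one petabyte, in bytes
    print(f"1 PB at 1 Gbit/s:  {transfer_days(PB, 10 ** 9):.0f} days")   # ~116
    print(f"1 PB at 10 Gbit/s: {transfer_days(PB, 10 ** 10):.0f} days")  # ~12

Months of wall-clock time at commodity line rates; that, not the API, is the moat.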


From: Steve Loughran (Jan 24 2009, at 01:56)

Bill is right: once you start to upload a few TB of data (and give away some URLs to it) your switching costs are pretty steep.

But Amazon's S3 charges slightly less to transfer your data out than to store the same gigabytes for a month. Which means that if the far end offers a free upload service when you migrate from S3, you can move off it for less than a month's storage. Then it's only the network load of copying a petabyte from one datacentre to another that matters.
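
Spelled out, with placeholder per-GB prices (illustrative assumptions only, not quoted S3 rates; substitute the current price sheet):

    # Compare one month's storage bill to a one-time transfer-out bill.
    # Both per-GB prices below are assumptions for illustration.
    STORAGE_PER_GB_MONTH = 0.15
    TRANSFER_OUT_PER_GB = 0.13

    def staying_vs_leaving(gigabytes):
        one_month_storage = gigabytes * STORAGE_PER_GB_MONTH
        one_time_egress = gigabytes * TRANSFER_OUT_PER_GB
        return one_month_storage, one_time_egress

    storage, egress = staying_vs_leaving(100_000)  # a hypothetical 100 TB
    print(f"one month's storage: ${storage:,.0f}")
    print(f"one-time egress:     ${egress:,.0f}")
    # If egress < storage and the destination ingests for free, the
    # move pays for itself within a month.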

API-wise, I might have some slides at ApacheCon EU on my proposed Apache Cloud Computing Edition platform.

