I see that Microsoft lost an appeal in the “Custom XML” litigation, and may be forced to disable that functionality in Microsoft Office. This is a short backgrounder explaining what “Custom XML” is about, and why nobody should care.

Extensibility · The “X” in XML stands for “Extensible” and that’s because anybody can invent an XML language. You could post a chunk of text on the Internet like so:
<tool tier="frobnosticators">X311-J</tool>
And you wouldn’t have to ask anyone’s permission to use the terms “tool”, “tier”, or “frobnosticators”, because of the extensibility.

Of course, doing that isn’t very useful. Most people who use XML work with handy pre-cooked sets of tags like RSS or Atom or XHTML or ODF or OOXML, and leave the extensibility to the people who do the pre-cooking for us. Most times, you never actually see the XML.

History · But back in the era of XML’s predecessor SGML, and even somewhat into the XML era, there was this vision that everyone should go out and invent new XML languages to meet their own particular business requirements. I think that these days, most people have come around to the view that you shouldn’t do that.

Back in the day, the dream was that you’d create your new language, and then you’d empower people to generate new documents in that language, using a specialized SGML or XML editor. These things were general-purpose parser/editor frameworks that could be customized to meet the needs of any particular tag set; the idea being that they could then be used by non-technical subject-matter experts. The first wave was made by now-forgotten companies named “SoftQuad”, “Arbortext”, and a few others.

Unfortunately, it turned out that doing the customization was hard, and the market wasn’t that big, and nobody ever really made serious money. I see that the products (XMetaL, Arbortext) still exist, and are being used. Last time I checked, XMetaL was being used in the US House of Representatives to draft legislation, for example. But it was never that big a product category.

Microsoft’s Variation · This brings us to Microsoft’s Custom XML. The idea is a hybrid; you can take the OOXML used for Office documents, extend it with some of your own private tags and attributes, and then customize Office to support authoring and processing.
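To give a flavor of what that means in practice, here’s a minimal sketch in Python; the purchase-order namespace and the po:partNumber element are invented for illustration, and real WordprocessingML carries far more baggage than this. The point is just that a private element can sit alongside Word’s own markup, and a few lines of generic XML code can fish it back out.

import xml.etree.ElementTree as ET

W = "http://schemas.openxmlformats.org/wordprocessingml/2006/main"
PO = "urn:example:purchase-order"  # an invented, private vocabulary

# A drastically simplified fragment of document markup, with one
# private element wrapped around an ordinary run of Word text.
fragment = f"""
<w:body xmlns:w="{W}" xmlns:po="{PO}">
  <w:p>
    <po:partNumber>
      <w:r><w:t>X311-J</w:t></w:r>
    </po:partNumber>
  </w:p>
</w:body>
"""

root = ET.fromstring(fragment)
for part in root.iter(f"{{{PO}}}partNumber"):
    print("part number:", part.find(f".//{{{W}}}t").text)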

At the time of the huge OOXML dogfight, one of the reasons Microsoft claimed that the world needed OOXML, even though there was already a perfectly-good ISO-standard XML office-document format, was that it enabled this wonderful customization feature.

People like me, who had experience with the extreme difficulty of doing this kind of customization, the extremely limited number of places where it made sense, and the high proportion of failure among people who tried to do it, shouted “That’s a bug!” Given that the number of organizations that deploy Office is huge, I bet Microsoft can trot out a few customers who’ve got good results with Custom XML. But I also bet that, first of all, the proportion who try is tiny and, second, that among those who do, few succeed in getting much business value.

So if Microsoft is forced to drop this particular feature, I’m pretty sure the customer pain will be strictly limited. Consider the nasty ratio between software complexity and market size: while Microsoft will never admit it, I wouldn’t be surprised if certain product managers in Redmond are doing high-fives right about now.



Contributions


From: Erik Engbrecht (Dec 22 2009, at 13:26)

Your statement about most people using pre-cooked XML formats just doesn't sit well with me. While I suppose it is probably true, I see too many made-up-as-we-go XML formats and half-baked, hyper-complicated, late-to-market standard formats to think that putting much weight on such facts is wise.

[link]

From: Paul Davies (Dec 22 2009, at 13:41)

And Arbortext is still being used within Sun to write and edit the vast majority of the documentation that is published on the docs.sun.com[SM] site.

[link]

From: Michael Kay (Dec 22 2009, at 13:53)

Tim, I think you're wrong on several counts.

Firstly, I think it's entirely reasonable to invent your own XML vocabularies for your own data. If you've got 20 thousand Excel spreadsheets containing employee appraisals and want to do something useful with the data, extracting it into XML and putting it in an XML database makes excellent sense, and a custom vocabulary is almost certainly going to work better than an "industry standard" that doesn't quite fit your data.

Secondly, I don't see the relevance of your remarks about authoring tools. We're talking here about extracting XML from office documents that already exist. Very often the authoring will continue to be in Word or Excel: that's the whole point; users can use the tools and the paper forms that they're familiar with, and the XML is hidden.

Thirdly, it's possible to extract the data as a transformation from the generic office XML exported by these tools, but that's extremely hard work. I've done it that way several times, for various reasons, but the alternative of exporting custom XML is certainly not to be dismissed.

Fourth, whether the facility is useful or not, you seem to be suggesting that it's a good thing that Microsoft should have to pull a feature from their products because some tin-pot little company bags'ed the idea first and failed to make any money out of it. That is very far from a good thing: it's an idea that threatens to wreck the industry on which we depend for our livelihoods.

[link]

From: Stefan Tilkov (Dec 22 2009, at 14:26)

"I think that these days, most people have come around to the view that you shouldn’t do that. "

Gazillions of new XML vocabularies are invented in enterprisey settings all the time. Whether that's a good thing or not is debatable, but I think the idea that it's wrong to invent one's own XML language definitely hasn't become mainstream yet.

[link]

From: Peter Flynn (Dec 22 2009, at 15:56)

I'm about half-way between Tim and Mike on this.

Yes, people do still have a need to use their own XML vocabularies, but the position is improving as more users grok that inventing a new one for each application isn't clever.

Authoring tools suck, as we all know. A very few suck slightly less than the rest. It's possible to author even quite complex structures in Word, using Styles, and have XSLT do some heavy-duty interpretation to make sense of it. I've done this several times, and it's tricky and tedious but achievable. But nothing you can do will prevent the ambitious and thoughtless author from reinventing the wheel, sometimes several different times in the one document. It's *because* of the interface that they do this: most of them have never seen any other interface to documents in their entire lives.

But I think Tim has a point: the last place on earth we need this kind of extensibility is in OOXML (and not in ODF either, thank you very much). Both suck as file formats for preserving meaning because they are designed to preserve appearance at the expense of everything else. Which is fine for most documents, which are ephemera anyway, but not for the smaller number of really important things, for which people will continue to use vocabularies designed for the job.

[link]

From: Robert Young (Dec 22 2009, at 16:24)

Sounds like you're creeping, ever so silently, toward what F. Pascal has been saying for a while now (as have I, since before I read up on Pascal):

The fact is that in order for any data interchange to work, the parties must first agree on what data will be exchanged -- semantics -- and once they do that, there is no need to repeat the tags in each and every record/document being transmitted. Any agreed-upon delimited format will do, and the criterion here is efficiency, on which XML fares rather poorly...

-- Fabian Pascal/2005

Fact is, one must write code (if you don't load into a relational database) to parse (or load and run an xml parser if you've decided to use xml), then code to process the data. All based upon the "agreed upon" data structure. No way around it. You have to write code for the data structure, whether that structure is in a schema/dtd/wherever. Unless you can write a universal code generator, which can understand any text ever written, you will never be able to use the schema/dtd to drive the processing of the data. You have to write that bespoke code each and every time. Then you get to write yet more code to do the work of the data, if you haven't loaded into that RDBMS. Face it, xml adds only overhead, both in byte load and code load. The truth will out.

By ceding definition of some "standard" formats to "organizations" ("the people who do the pre-cooking"), you dismantle the very raison d'etre of xml. Again, not that I buy the argument in the first instance.

On the other hand, if you've inveigled management into believing that your xml stuff is actually clever, you get to keep writing lots more code. Sort of like what COBOL coders did in the 60's.

[link]

From: Peter da Silva (Dec 22 2009, at 16:53)

I use custom XML on a daily basis. I don't really care for XML (I think XML is overly verbose and poorly designed), or for Word (it's an appalling document format, horribly structured), but for communicating between components that already have XML parsers it's really easy to write an XSL transformation or create XPath definitions for components to pull what they need out of a thick nasty gumbo of random tags.
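The kind of thing I mean is no more complicated than this rough sketch (the tag names and namespace are invented for illustration; any stock XML library would do):

import xml.etree.ElementTree as ET

# A blob of component-to-component XML, most of which this component
# doesn't care about.
message = """
<status xmlns:m="urn:example:monitoring">
  <noise>ignore me</noise>
  <m:sensor id="ps-1"><m:temperature>41.7</m:temperature></m:sensor>
  <m:sensor id="ps-2"><m:temperature>39.2</m:temperature></m:sensor>
</status>
"""

root = ET.fromstring(message)
ns = {"m": "urn:example:monitoring"}

# One XPath-style query pulls exactly the values this component needs
# out of the gumbo; everything else gets ignored.
for sensor in root.findall(".//m:sensor", ns):
    print(sensor.get("id"), sensor.find("m:temperature", ns).text)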

Now, this doesn't affect me, because I had never even considered using Word for this... I'd rather write documents in raw HTML in a text editor from the '60s than try to do anything longer than a memo in Word... but I think the characterization of custom XML in this article is completely mistaken.

[link]

From: Peter Sefton (Dec 22 2009, at 17:18)

One of the big costs with the hybrid Word/XML stuff you don't mention here is that you not only have to develop the schema but also code a UI for it in Word, while tip-toeing around Word's other markup. This costs a lot, is very fragile, is limited to small islands of XML amongst other document markup, and is not backwards compatible with other versions of Word or interoperable with other word processors.

Bottom line is that the business case for in-Word XML customization is even less compelling than what you'd get using a more typical XML toolchain.

[link]

From: David Ing (Dec 22 2009, at 17:45)

Might be worth re-reviewing this one Tim.

In the Microsoft world, the custom XML grammars added to MSOffice are fairly common in the dull but profitable 'enterprisey' space. Some Slashdot readers will be coming away from this with an odd perspective; this is not a low-impact thing.

I know of at least 3 companies in Vancouver alone whose products/systems would be impacted by this 'Custom XML' patent change.

Check out the area of 'Office Business Apps' where people have used schemas to bind data within their word docs (i.e. a mail-merge from hell etc) http://msdn.microsoft.com/en-us/office/aa905528.aspx

[link]

From: Gavin Nicol (Dec 22 2009, at 19:52)

I've actually written a few sets of VBA scripts that allow 'Custom XML' (i.e. my own specific tag set conforming to a DTD) to be imported/exported, preserving markup, formatting, etc. in the process. A PITA to set up the first time, but it has always been possible for anyone who cared to do it. My experience is that even getting people to use styles consistently is an uphill battle...

[link]

From: Matthew McKenzie (Dec 22 2009, at 21:05)

Let's consider Microsoft's stated position on Custom XML, as quoted in a Bloomberg News wire service article:

"The court upheld a verdict that has grown to $290 million, won by closely held I4i LP of Toronto. The dispute is over an invention related to customizing extensible markup language, or XML, a way of encoding data to exchange information among programs. Microsoft has called it an “obscure functionality.’’

Microsoft said it has been working on the change since the trial judge first ordered a halt in August and has “put the wheels in motion to remove this little-used feature from our products.’’

link: www.boston.com/business/technology/articles/2009/12/23/microsoft_alters_word_to_comply_with_court/

Note also that Custom XML support was never slated to appear in Office 2010.

Assuming that one is willing to take Microsoft at its word -- this is an "obscure" and "little used" feature -- I think the burden of proof falls upon those who see the matter differently.

Setting aside the other points that have been raised in these comments, I think that Tim's original premise is entirely correct: removing Custom XML from MS Office is an issue only for a tiny fraction of Microsoft's business user base. I'm perfectly willing to consider quantitative evidence to the contrary, although that evidence clearly won't be coming from Microsoft, given its stated attitude towards the technology.

[link]

From: Isaac Rabinovitch (Dec 22 2009, at 21:23)

@Paul Davies:

A vast majority? I could be mistaken, but I believe most of the documents on DSC are hardware documents. The Arbortext-based toolset you refer to is by and for Sun's software documentation group (though one small hardware group has been experimenting with it). Even on the software side, not everybody uses it. There are a lot of folks at Sun who will give up FrameMaker when you pry it out of their cold dead hands.

Which is too bad. I've used those tools, and they're pretty impressive. But note that the toolset only exists because 10 years ago somebody decided to invest a lot of money in it. Epic is the least of it. There's content management, a cross-reference database, and really sophisticated software for generating XML, PDF, and other deliverables. Such a project would never get funding in most organizations -- including the current Sun.

[link]

From: Aristotle Pagaltzis (Dec 22 2009, at 22:02)

Robert Young:

> you will never be able to use the schema/dtd to drive the processing of the data. […] Face it, xml adds only overhead, both in byte load and code load.

your whole comment is so fallacious I cannot even call it an argument in good conscience.

First of all, the line of reasoning which leads you to dismiss grammars in XML should equally apply to schemata in SQL. (The reality is more complex in XML, which is why we have Schematron and Relax NG next to DTDs and WXS, but the point stands.)

Secondly, how you jump from a dismissal of grammars to the argument that XML as a whole is only overhead is a complete mystery. You have put forth no argument about how the latter follows from the former.

Aside from those, your “universal code generator” attempt at an argument is absurd, which is easily demonstrated by the fact that it can equally well be used to dismiss any universal interchange format whatsoever, right down to character sets and text encodings.

Ultimately you have, as I said, presented no argument whatever.

[link]

From: Martin Probst (Dec 23 2009, at 03:44)

While I have never used this particular feature in Word, I thought it might be very beneficial in the long term.

As Michael Kay already said, businesses keep huge amounts of their knowledge in Office documents, which are as of now mostly unstructured heaps of goo.

I haven't really worked on it, but I fancy the idea of moving this extremely unstructured data into something somewhat more structured, a bit like embedding microformats into Word documents.

This would make heaps of knowledge and data accessible to tooling, and would help many businesses. For example, there is a whole industry around legal search and discovery, mostly because there is no reliable or easy way to find out if a particular piece of text in an office document is a company, business sector, product, etc.

I know this whole "metadata" thing is not so much en vogue anymore, and I'm aware of the progress on automated classification and text analysis, but still, all of this could be so much easier.

So Microsoft being required to pull this feature seems detrimental technology-wise, and the whole patent thing is simply outrageous.

[link]

From: len (Dec 23 2009, at 06:49)

More Spy Vs Spy. Frankly, I don't care. It's a "do what works best for as many people as can express their needs and force the rest" proposition.

As for this:

"once they do that, there is no need to repeat the tags in each and every record/document being transmitted. Any agreed-upon delimited format will do, and the criterion here is efficiency, on which XML fares rather poorly..."

I've seen too many multi-company comma or space delimited file transfers fall apart over time to take that seriously. Fabian needs to do real work in real industry production environments for a while, where downstream products such as mapping rely on weak upstream input built in HTML forms by people who were told to get it done fast.

There is so much of the real world missing from this "agree to semantics" magic hand-waving that it's barely worth commenting on.

[link]

From: Robert Young (Dec 23 2009, at 09:51)

Aristotle:

Ah, controversy; the gauntlet thrown. Respond?? Well, yes. To reiterate my point: Tim, he of xml, questions the base assertion of xml goodness, that anybody should create xml schemas/grammars at will. If Tim can question such a basic assertion, where may this questioning lead him and others? I feel it leads to questioning the basic assertion of *use*, data interchange. And thus, off we go.

You cavil, but at no point do you address Pascal's (and my) point: all sides of the data exchange must know (and implement in application code) the semantics (external to the schema) of the data in order to process the data, and the presence of tags and schemas does not remove the requirement to code processing for each schema; only by reading and coding can that be done. Such code is coder specific, no matter how stringent the shops' rules; there is no guarantee that OrgA will code the same as OrgB. With relational catalogs, these are declarative and can be shared; even across database engines with minimal effort. Once established, the agreed upon catalogs can exchange data transparently; in fact, this is known as federation, and supports data exchange between foreign database engines with no database or application code required. There is, in addition to each engine's native catalog, the SQL standardized INFORMATION_SCHEMA.

So, to some quotes.

>> First of all, the line of reasoning which leads you to dismiss grammars in XML should equally apply to schemata in SQL.

No. You assert that xml schemas are semantically equivalent to SQL/relational catalogs. Not even close. Catalogs in SQL (relational) databases reside in the datastore and are far more expressive of relationships and constraints on the data (for one thing, they do not impose a hierarchical structure on the data) than any xml schema can ever hope to be and do not require code in any application to implement the rules, which requirement does exist with xml schemas. And they don't travel with the data (the tags are the metadata); this is quite the whole point. SGML/HTML/XML as printer controls, which is what they were designed to do (screens count as printers) based on IBM and DEC precedent markups, is fine.

There is a host of historical and contemporary criticism of xml's mixed structure. This is not novel to me, I will admit. In any case, it was not I who dismissed grammars being created willy-nilly in the wild, but Tim; I merely congratulated and extended the argument. The assertion of xml as a superior data transfer mechanism (and storage, for that matter, but for different reasons) is my issue. Hierarchical data records, which is all any xml file can be, are less complex than what can be expressed in the relational model, which is why hierarchical datastores were dismissed decades ago, not least because all data is *forced* into some hierarchical structure. The attempt to "relationalize" xml with RefId and extensions is proof that the "thought leaders" in the xml world know what's missing. Last time I checked, xml processing knows only about the single "document", which means it must cram all the "relational" data (customer, order, order lines, inventory, etc.) into each and every one. If that's not excessive overhead, I don't know what is.

It's germane to point out that Chamberlin, he of XQuery and SQL, defined SQL *not* as an implementation of Codd's relational model (they both worked for IBM, but not together), but in defiance of it (IBM was not pleased with Codd for having revealed the inherent weakness in IMS a few short years after its release; they were late to the RDBMS party on purpose). That he would revert to type (he was an IMS guy) with XQuery says more about him than the worth of xml/XQuery.

>> Secondly, how you jump from a dismissal of grammars to the argument that XML as a whole is only overhead is a complete mystery.

Please. Two separate reasons to dismiss xml. If you don't need xml formatting to interchange data, then you don't need 1,070,561 xml grammars to interchange data and vice-versa. There is, again, a host of historical and contemporary criticism (not novel to me, by any stretch of the imagination) that data interchange in xml is weighed down by embedded metadata (all those damn tags) and redundant "relational" data. To deny such observations is intellectually dishonest. The only useful information in an xml document is the data, all of that other stuff exists to make the xml machinery work. Said data can be exchanged far more efficiently using other means. The very existence of "binary xml" and "compressed xml" speaks to kludge attempts to fix the bloat problem.

Return to first principles: data has to be exchanged in an agreed upon format, loaded into the rules based datastore (which holds the metadata; you could use IMS, but DB2 is better). The notion that the hierarchical data structure/datastore is necessarily superior is not supported by experience or theory. It is merely an assertion made by the xml crowd simply because without this assertion, xml no longer has an intellectual justification. The assertion that data is "naturally hierarchical" (you can find that quote from many xml zealots) is the bedrock of all xml justifications. But this is an *assertion*, not a fact. Even the "org chart" example, in my experience, is false. All of the many orgs I've worked in have used "matrix management" practices, leaving most employees with multiple reporting points. These are relational structures, not hierarchical.

The relational model, on the other hand, is simply and elegantly derived through maths. SGML, I'll note, was created by LAWYERS to pretty-print their stuff, for crying out loud. Would you trust anything a LAWYER said??? Of course not. And, as contemporary industrial strength databases have shown, hierarchical data is just as easily stored and processed within the relational model. That was always true. The opposite is NOT. Hierarchical stores have been unable to implement relational semantics, claims for XQuery notwithstanding. Nor will they ever.

>> Aside from those, your "universal code generator" attempt at an argument is absurd, which is easily demonstrated by the fact that it can equally well be used dismiss any universal interchange format whatsoever.

Not even close. A CSV format between two catalogs requires no coding be done at either end. There must be rules about order of submission to the receiver; independent tables first, dependent next. But that is such a simple rule to follow. The catalog either accepts or rejects, with nicely informative messaging of course, the data. No coding required. With xml schemas, there must be code at both ends. This code cannot reasonably be generated from either the xml file itself, or a dtd/schema. The schema processor can, at best, edit against the simple rules of the schema. The actual constraints have to be coded in the application's language, not the data language (SQL as an example), thus tying the data to that siloed application. That was the COBOL way in 1965.

Which leads to the real reason xml was cleaved to by coders, java especially as we all know: it was a justification for thumbing the coder's nose at the relational (and OO to a degree) database, and justification to write yet more siloed application code. This, in my view, is not an intellectually pure raison d'etre. But coders, and their managers more so since their existence is based on budgets and headcounts, would rather have lots of themselves banging out text forever than, to mix a metaphor, slice the Gordian Knot. It's all so self serving.

[link]

From: Joe Powers (Dec 23 2009, at 10:51)

It's nice to see a lot of people commenting on this subject who don't know anything about it.

.DOCX files are really .ZIP files and thus can contain multiple files. If you place a Custom XML file into the .DOCX container, you can then create links between tags in the custom file and fields in the MS-Word document. This will allow you to just edit the fields in MS-Word and have the changes reflected in the Custom XML file (i.e., MS-Word becomes the editor of the XML file).
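To make that concrete, here is a rough sketch of peeking at such a part with nothing but the Python standard library (the file name report.docx and the part name customXml/item1.xml are only illustrative; a real document may lay its parts out differently):

import zipfile
import xml.etree.ElementTree as ET

# A .DOCX file is an ordinary ZIP archive; custom XML parts
# conventionally live under the customXml/ folder inside it.
with zipfile.ZipFile("report.docx") as docx:
    print([name for name in docx.namelist() if name.startswith("customXml/")])

    # Parse one custom part and walk its elements; the tags here are
    # whatever the document's author invented, not WordprocessingML.
    root = ET.fromstring(docx.read("customXml/item1.xml"))
    for element in root.iter():
        print(element.tag, (element.text or "").strip())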

The i4i patent covers the ability to create an index TAG address array of an XML file to make data changes faster and easier. If you read the patent and then look at what MS-Word has to do to edit fields in this External XML file, you'll quickly understand that MS-Word must implement the patent to allow fields to link to the data's location in the XML file.

The term "Custom XML" is used in this case to mean "the ability to embed external XML data into a document".

[link]

From: Brian (Dec 23 2009, at 12:57)

So how does this impact the DOCX file format where you can embed custom XML files and then bind those to Word 2007 Content Controls? Is this functionality being disabled?

[link]

From: Jim Sterken (Dec 23 2009, at 14:59)

Tim, I agree that the Custom XML functionality won't be missed much in MS Word if that's how this turns out.

Regarding those "now-forgotten" companies like Arbortext where nobody ever made any serious money, though: Arbortext was acquired by PTC (www.ptc.com) in 2005 for $190M and is now being used by thousands of companies, including Sun and Oracle, to automate publishing of printed and online content. This is a large and growing market.

As others have commented, you need schemas to define the semantics so data interchange and processing can work. I agree though that having too many special purpose schemas is counterproductive. DITA is a promising development in the publishing arena. It defines a single, content-oriented schema that can be specialized in a constrained, yet fairly general way where needed. DITA has gotten a little too complicated, but it is providing enough of a framework to allow the building of better content authoring UIs.

[link]

From: Simon Phipps (Dec 23 2009, at 16:03)

Amazed to see that the disjoint worldviews of the dataheads and the docheads are still alive and well after more than a decade. As a dochead, Tim is exactly right; as a datahead, largely wrong. Ne'er the twain shall meet.

[link]

From: P. Sismondi (Dec 23 2009, at 18:37)

From 2000 to 2006 I was involved in a project developing a custom DTD plus all kinds of ancillary software for government legislation. The project suffered many of the difficulties that Tim refers to in his article advising against rolling your own. The project eventually worked, more or less, but the carnage in terms of budget, personal anxiety, institutional conflict and so on was HUGE.

I helped review various authoring tools. The company that just won against MS (i4i) was one of the bidders. In the end we chose Arbortext. Authoring was just the beginning of our problems; developing a print solution was really tough. (It was early days for XSL-FO implementations.) There were other challenges beyond authoring and printing. I would not go there again.

However....

I have been away from the world of markup for a few years, and recently returned to it. I am surprised to learn (or maybe I'm not surprised) that there is a pretty big backlash in many quarters against XML generally. It was certainly oversold as a panacea back ten years ago, so maybe that's the reason for the current scorn.

As a really weird example, my current effort to learn Lisp (I know, I know :-) recently involved making an offer to help with the docbook manual for an open-source Common Lisp implementation. This ignited a holy war on the mailing list in which several expert Lispers denounced XML, and proposed that the *only sensible thing to do* would be to create a new markup language for documentation using s-expressions instead of XML!

However, in the end I think designing an XML language is no different than embarking upon any significant software engineering project: all are harder than they look, and many (most) fail dismally; hope and mind-boggling idiocy spring eternal in the world of software.

BTW, Michael Kay's books were my best and favourite learning tools back then. So I respect his opinion.

Best,

- P -

[link]

From: Gray Knowlton (Dec 23 2009, at 19:35)

Hi Tim,

I don't really want to engage in the larger debate here about the merits of custom-defined schemas in Open XML, but I did want to point out that the area of Word affected is much more narrow than custom-defined schema support.

Details: http://blogs.technet.com/gray_knowlton/archive/2009/12/23/what-is-custom-xml-and-the-impact-of-the-i4i-judgment-on-word.aspx

[link]

From: Michael Ruminer (Dec 23 2009, at 20:02)

In response to Joe Powers--- THANK YOU! I was about to give up hope that anyone on the thread actually realized what the Custom XML programmatic (for lack of a better word) feature of Microsoft Word really was.

I do suspect that there has never been wide adoption of the capability, but those who do use it likely have it deeply embedded in business processes and document workflows. I know that I have created a few systems that utilized Custom XML parts for Government and Enterprise systems.

I hate to think about what is going to have to be done to rework those systems. Ughhh...

[link]

From: Michael Ruminer (Dec 23 2009, at 20:18)

to Brian who asked:

"So how does this impact the DOCX file format where you can embed custom XML files and then bind those to Word 2007 Content Controls? Is this functionality being disabled?"

In short and in total: Yes.

Now what???

[link]

From: Michael Ruminer (Dec 23 2009, at 20:33)

Gray Knowlton posted his URL: http://blogs.technet.com/gray_knowlton/archive/2009/12/23/what-is-custom-xml-and-the-impact-of-the-i4i-judgment-on-word.aspx

I was incorrect in stating this affected Custom XML for Content Controls. Apparently not. Good news.

I'll sleep better tonight!

[link]

From: Patrick Durusau (Dec 26 2009, at 13:01)

Two quick points:

On Peter's remark on "preservation of meaning," see the drafts of ODF 1.2 that detail the addition of RDF based metadata for ODF documents. Demonstrations of how the metadata capabilities can enhance ODF documents are already in the planning stages.

MS needs to get behind patent reform in a big way. Leaving aside the facial absurdity of this patent in particular, that anyone can patent an idea for software threatens reasonable software development and marketing. Licensing a compiled binary makes sense, letting trolls patent ideas does not.

Patrick Durusau

[link]

From: Russel Gauthier (Dec 27 2009, at 20:21)

I just wanted to state that just because Microsoft claimed this feature is obscure and basically unused doesn't mean that the claim is true. This is obvious, but given the fact that they were being fined money, they aren't going to claim it is integral to Office; rather, they will try to make it look insignificant, because if it appeared as though it were an integral part of Office, they would have been sued for more.

That's really the only thing I have to say about this actually.

[link]
