In a recent ongoing piece, I mentioned the “Canada Line”, a huge construction project currently disrupting Vancouver. Motivated in part by the 2010 Winter Olympics, it’s a subway/elevated train connecting the city core, the airport, and everything on the path between them, including a big strip of central Vancouver and Richmond, the suburb with the airport. (It’s called the “Canada Line” because the biggest chunk of funding comes from the federal, as opposed to the provincial or city, government.) Since I’m writing for the Net, I wanted to link to it. A quick search for its Web site also turned up a pretty good Wikipedia entry on the subject. The question is, which to link to? The answer isn’t obvious.

Why Wikipedia? · One great thing about the Web is that everyone can be part of it; another is that anyone can link to anyone; and a particularly great thing about blogs is that they do so a lot.

The trouble with the Web is that like everything else, per Sturgeon’s Revelation, 90% of it is crap. In particular, a lot of institutional sites are pathetic self-serving fluff served up in anodyne marketing-speak with horrible URIs that are apt to vanish.

Linking to the Wikipedia instead is tempting, and I’ve succumbed a lot recently. In fact, that’s what I did for the Canada Line. After all, the train is still under construction and there’s no real reason to expect today’s links to last; on top of which, the Line’s own site is mostly about selling the project to the residents and businesses who (like me) are getting disrupted by it, and the taxpayers who (like me) are paying for it.

Wikipedia entries, on the other hand, are typically in stable locations, have a decent track record for outliving transient events, are pretty good at presenting the essential facts in a clear, no-nonsense way, and tend to be richly linked to relevant information, including whatever the “official” Web site might currently happen to be.

At this point in history, the Wikipedia continues to look good, and the cheap-shot artists who dislike it when the general public is allowed to help write the Great Books, such as Bob “Wikipedia is like a public toilet” McHenry and Andrew “Khmer Rouge in Daipers (sic)” Orlowski, are looking bad. (The jury is still out on Nicholas Carr’s contention that the Wikipedia is inexorably evolving away from its populist roots into something like a conventional reference-publishing project.)

Why Not Wikipedia? · But this makes me nervous. I feel like I’m breaking the rules; being able to link to original content, without benefit of intermediaries, is one of the things that defines the Web. More practically, when I and a lot of other people start linking to Wikipedia by default, we boost its search-engine mojo and thus drive a positive-feedback loop, to some extent creating a single point of failure; another of the things that the Web isn’t supposed to have.

I’d be astonished if the Wikipedia suddenly went away. But I wouldn’t be very surprised if it went off the rails somehow: commercial rapacity, legal issues, or (especially) bad community dynamics; we’ve seen that happen to a whole bunch of once-wonderful Internet resources. If and when it did, all those Wikipedia links I’ve used (396 so far, starting in June 2004) would become part of a big problem.

[Let me be clear: I am an unabashed partisan of Wikipedia. I think it is a triumph, a piece of evidence that being human is no bad thing; and I’ll do what I can, when I can, to help it. If I think I see it going sideways I’ll be in there shrieking and fighting.]

Learn From History? · Obviously, the notion that writers should insert links to other relevant information didn’t first appear on the Web: it’s at the core of academic publishing. The word “scholarly” applies to work in which no assertion may stand unaccompanied by supporting evidence; I think scholarly is good.

Academic citations, the stuff of scholarship, are not simple one-way pointers, they are little bundles of metadata: a page (or section) number in a particular edition of a particular published work, identified by author, title, date, publisher, and so on. Given the way the library system works, there’s a high degree of confidence that, given the contents of a citation, you’ll be able to track down the original.

Web links are clearly transient and fragile by comparison (the appeal of linking to Wikipedia is precisely that, at the moment, it feels a little less so). On the other hand, you can click on them and follow them, right here right now; and you can spin “transient and fragile” the other way, as “dynamic and fresh”. I don’t think, on balance, that they’re a step backward.

Fragile Links · When I first started writing ongoing, I was thinking about a fragile="true" attribute on links, to be inserted when I thought that there was a good chance that the ongoing piece would outlive them. The idea was that I’d fiddle the stylesheet so that “fragile” links would look different, by way of a warning.
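For concreteness, the markup and styling might have looked something like this; the attribute is the one I had in mind, but the CSS rule is purely illustrative, nothing I ever shipped:

<style type="text/css">
  /* flag "fragile" links, by way of a warning */
  a[fragile="true"] { border-bottom: 1px dashed #999; }
</style>

... the <a href="http://www.canadaline.ca/" fragile="true">Canada Line</a> is disrupting my neighborhood ...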

The idea was not entirely lame. But it needed me to make a whole bunch of judgment calls, and I quickly decided I probably wasn’t very good at guessing at longevity; so I discarded it.

Solution: Link Bundles? · If we really care about links being useful in the long term (and we should), maybe we need to abandon the notion that a single pointer is the right way to make one that matters. If I want to link to Accenture or Bob Dylan or Chartres Cathedral, I can think of three plausible ways: via the “official” sites, the Wikipedia entries, and Google searches for the names. [More generally, I should say: direct links, online reference-resource links, and search-based links. I’ll come back to that.]

What I want, then, is a link to a bunch of things at once. It turns out that there’s a perfectly good, if lightly-implemented, way to do this in XML, called XLink [Disclosure: I helped create it.]. It’s been lightly implemented mostly, I think, because the browser writers just didn’t feel any particular pull for such a thing. This has struck me as a little odd, because every financial Web site in the world is full of multi-ended links: every time they mention a company they’ll typically link to its share price, some analysis, and previous articles; check out almost any page at TheStreet.com, for example.
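To make that concrete, a multi-ended link for the Canada Line might look roughly like this; the xlink:* attributes are from the spec, but the element names, labels, and exact URIs are just for illustration:

<bundle xmlns:xlink="http://www.w3.org/1999/xlink" xlink:type="extended">
  <loc xlink:type="locator" xlink:label="official"
       xlink:title="Canada Line (official site)"
       xlink:href="http://www.canadaline.ca/"/>
  <loc xlink:type="locator" xlink:label="reference"
       xlink:title="Canada Line (Wikipedia)"
       xlink:href="http://en.wikipedia.org/wiki/Canada_Line"/>
  <loc xlink:type="locator" xlink:label="search"
       xlink:title="Canada Line (search)"
       xlink:href="http://www.google.com/search?q=%22Canada+Line%22+Vancouver"/>
</bundle>

One element, three ends: a direct link, a reference-resource link, and a search-based link.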

One Level of Indirection · I want more than multi-way links: I don’t want to jump into bed with Wikipedia or Google or any potential single point of failure. I’d be willing to bet that if Wikipedia goes off the rails and some new online reference resource comes along to compete, there’ll be an automated mapping between Wikipedia links and the new thing; so the actual URIs may retain some value. Similarly, a search string needn’t be tied to any one search engine.

Linking 2.0 · So here’s what I’d like: a way to write multi-ended links with simple indirection, and a reasonable way for users to display them in whatever browser they’re using. Fortunately, I have a nice link-rich testbed here at ongoing, with software I control, and in the era of GreaseMonkey and AJAX, who needs to wait for the browser builders? Unless someone points out why this line of thinking is clueless or (much more likely) points at where someone’s already solved the problem, maybe I’ll take a run at it.
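For instance (just a sketch; every class name and rel value here is invented), a link bundle could be written so that it degrades to an ordinary link in any browser, with a GreaseMonkey-ish script folding the alternates into a little menu:

<span class="linkbundle">
  <a href="http://www.canadaline.ca/">Canada Line</a>
  <a class="alternate" rel="reference"
     href="http://en.wikipedia.org/wiki/Canada_Line">Wikipedia</a>
  <a class="alternate" rel="search"
     href="http://www.google.com/search?q=%22Canada+Line%22+Vancouver">search</a>
</span>

A stylesheet could hide the alternates by default; readers without the script would just see plain links.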



Contributions


From: Hanan Cohen (Jan 20 2007, at 22:43)

I have also been thinking about those issues lately, specifically about linkrot (http://www.useit.com/alertbox/980614.html)

I think that the 404 comes from the idea that things should work, and that if they don't, an error message should be displayed. As if the web were software and someone should be notified when there is a problem.

Since the web is a place, and getting old, the experience of being in this place should flow. We should do something to make the experience better: prevent the potholes, for example.

A CMS can check for valid links and not display them if they lead nowhere. Browsers can also do that.

I think this is only the beginning of thinking about a web that has already come of age.


From: Justin (Jan 21 2007, at 01:02)

Linking to a concept rather than a particular website describing it is one of the roles Wikipedia seems to have assumed. For data formats (not necessarily markup like HTML) willing to pay the RDF tax, the Semantic Web community has good solutions for this problem, using URLs that return 303 See Other redirects to resources, which RDF-aware agents can interpret as referring to the same subject and thus aggregate.

It seems that a web service allowing the creation of a URL for a particular concept, which the community and automated technologies (Google, Wikipedia, etc.) could then use to build persistent links for that topic or concept, might be in order.


From: Danny (Jan 21 2007, at 02:22)

I think your description of the problem is on the nail (I have exactly the same feelings about Wikipedia). But there's one aspect that's there, but not really made explicit - hypertext links point to documents, yet what you want to link to (Accenture, Bob Dylan or Chartres Cathedral) are *things*. This isn't just hypertext, it's hyperdata.

To capture that information a level of indirection is necessary, and part of the problem - how to model this kind of thing in the Web environment - has already been solved: Just Use RDF, e.g. in <a href="http://www.dajobe.org/2004/01/turtle/">Turtle</a> syntax:

# prefix declarations assumed (the snippet needs them to parse)
@prefix foaf: <http://xmlns.com/foaf/0.1/> .
@prefix dc:   <http://purl.org/dc/elements/1.1/> .

[
  a foaf:Person ;
  foaf:name "Bob Dylan" ;
  foaf:homepage <http://bobdylan.com> ;
  dc:related <http://en.wikipedia.org/wiki/Bob_Dylan>
] .

This describes Bob the person, and as foaf:homepage is inverse-functional, it identifies him unambiguously. As timbl has <a href="http://dig.csail.mit.edu/breadcrumbs/node/72">stressed</a>, these links work in both directions.

Seems like the unsolved parts are how to publish such material in a publisher-friendly fashion, and deal with the stuff in the browser. One approach to the latter part is to make a browser that can understand native RDF formats - timbl's <a href="http://www.w3.org/2005/ajar/tab">Tabulator</a> and the recent <a href="http://sites.wiwiss.fu-berlin.de/suhl/bizer/ng4j/disco/">Disco Hyperdata Browser</a> being examples.

But this still doesn't address the publishing angle. I reckon you're right, XLink does seem like a good solution (especially since there is already a <a href="http://www.w3.org/TR/xlink2rdf/">XLink2RDF</a> mapping). One of those technologies that deserves revitalising.

Then there are microformats - the rel attribute can go a long way (even if there isn't necessarily the thing/document indirection in the linkage, as is the case in <a href="http://gmpg.org/xfn/">XFN</a>, that can be put in place when mapping to RDF). In the general case, there's always RDFa (which currently relies on XHTML 2.0, though it's been suggested that will change) and more immediately <a href="http://research.talis.com/2005/erdf/wiki/Main/RdfInHtml">eRDF</a> (which is a microformat-style expression of RDF in current HTML).

Any of these approaches to markup could be exploited in the browser (with GreaseMonkey or whatever), and with the <a href="http://www.w3.org/TR/grddl/">GRDDL</a> mechanism they're available for direct machine processing, aka services.


From: Henri Sivonen (Jan 21 2007, at 03:15)

I think what TheStreet does now is a very reasonable approach. It is compatible with any browser, it is understandable to users and it doesn’t cause a new UI problem for browsers. Why complicate it?


From: Ed Davies (Jan 21 2007, at 03:23)

"If I want to link to Accenture or Bob Dylan or Chartres Cathedral, I can think of three plausible ways: via the “official” sites, the Wikipedia entries, and Google searches for the names."

The semantic webbians would probably answer that there is a fourth plausible way: link to the object itself. Give the (or a) URI of the subject. This would be, of course, distinct from the URL of the subject's official site or Wikipedia entry though it might be related.

Or you could describe the object indirectly, e.g., in Turtle: [ foaf:isPrimaryTopicOf <http://en.wikipedia.org/wiki/Cathedral_of_Chartres> ]. Is there a way to do that sort of thing with XLink? Maybe "Linking 2.0" needs it.


From: Sjoerd Visscher (Jan 21 2007, at 03:32)

I'd hate it if every link showed a menu where I have to choose. I'd prefer it if you did that for me.

I agree with Hanan that something should be done with 404s. A browser could come with a service that suggests alternative links when it encounters a 404. This service would also be a single point of failure, but browsers are easily updated.


From: David Douglas (Jan 21 2007, at 04:50)

Two thoughts:

1. Going back to your academic paper analogy, you could imagine that I don't embed the link, but embed metadata (probably looks like tags). These could be sent to something which returns links: a quick hack today could assemble the tags into a query to Google or del.icio.us and you'd just take the first link returned. You could also imagine adding some other tags to indicate preferential URLs, like canadaline.ca, and other preferences to help decide among multiple links.

2. This URL is another interesting one to think about including: http://web.archive.org/web/20060526233501/http://www.canadaline.ca/


From: Eric Jain (Jan 21 2007, at 05:34)

Linking things in general to Wikipedia, wouldn't that be a bit like starting to link all words to Dictionary.com? Why not just link to the official site, and leave it up to people to go check Wikipedia (or whatever their favorite reference site is)? This shouldn't be a big deal, and I'd be surprised if there weren't browser plug-ins for looking up a term from a web page, or even showing relevant Wikipedia entries in a sidebar automatically.

Sometimes however there is no single official site for something you link to, and then it would indeed be great if there was a standard way to have multiple links. Listing all links can be too verbose in some situations, and DHTML menus are (IMHO) a waste of time and inevitably broken in some way...

Another case is where there are several identical mirror sites, and you need to list all of them so people can choose the nearest, or an alternative if their first choice happened to be down. Ideally all sites would handle such setups transparently, but this isn't trivial to set up (especially if you don't have the budget to pay Akamai :-).


From: Chris Brew (Jan 21 2007, at 08:59)

The first step is to work out why you want a link. OCLC and others have done a lot of careful thinking about how to provide links that give you a pretty good approximation to scholarly citations. The official goal of scholarly citations is quality control (at least, that's how I see it, and what I tell students). But citation links are also often used as tools for exploring. On the web, links are more for curiosity and exploration, less as a means to maintain scientific standards.

Either way, you are linking because you want to give people access to some of the information you relied on. And in linking, you feel yourself to be somehow endorsing the information at the other end of the link (you may not agree with it, but you do want people to notice it). The point is that when the link breaks or (worse) starts to point to something else entirely, the linker feels responsible, even though they had no control over what the linkee did.

So what you want is a way of bundling together a set of links that offer the reader a decent approximation to the experience you want them to have. One way of doing this would be to have a level where you can represent your goals in creating the set:

- I want a link to the current stock price of Zippycorp
- I want a link to today's analysis of Zippycorp
- I want links to some recent related articles
- I want a permalink to my article on the total and irrevocable wonderfulness/error of Zippycorp

Then you marry this with a statement of how to fill the slots in this template. You could say:

- get the stock price from Yahoo finance
- get the analysis from CBS Marketwatch
- get the related articles from a search engine of your choice
- get my article from a specified PURL

This way would let you change your mind about how to fill the template, provide alternates, and so on. Maybe looser than you want, but it's a start.
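In markup, that two-part separation might be sketched like this (all element and attribute names invented purely for illustration):

<linkset about="Zippycorp">
  <want role="stock-price"/>
  <want role="analysis"/>
  <want role="related-articles"/>
  <want role="my-article"/>

  <bind role="stock-price"      via="http://finance.yahoo.com/"/>
  <bind role="analysis"         via="http://www.marketwatch.com/"/>
  <bind role="related-articles" via="http://news.google.com/"/>
  <bind role="my-article"       via="http://purl.org/"/>
</linkset>

The <want> elements state the goals; the <bind> elements say how to fill the slots, and can be swapped out without touching the goals.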


From: Joe Duck (Jan 21 2007, at 09:43)

Excellent insights! I really like the link bundle concept. Wikipedia is great for many things but I think it fails rather dramatically for commercial queries. It's still very hard to find *the best* sites to use to find hotels in Vancouver. Many have hidden commercial agendas. Link bundling might help screen for "the top few sites" better than Google or Yahoo searches, both of which could also be in the link bundle.

Hmmm - could you combine link bundle choices with tagging such that you'd get a sort of optimized link bundle over time?


From: Rob (Jan 21 2007, at 10:03)

More frightening than Wikipedia going off the rails is Google itself coming undone. Which it is showing signs of doing of late.

Heraclitus was onto this a very long time ago, I suppose. Links as a river you can never step in twice?


From: Daniel Haran (Jan 21 2007, at 10:42)

Wonderful. Wikipedia has indeed gotten some search engine mojo because of bloggers and others. It seems Wikipedia has become the default namespace for bloggers.

The XLink spec seems fairly heavy. Wondering if there is room for something that degrades well but can be implemented right away, I came up with this:

<a href="http://www.tbray.org/ongoing/misc/Tim" class="hLink" alt="Tim Bray's personal profile">Tim Bray

<a href="http://en.wikipedia.org/wiki/Tim_Bray" alt="Tim Bray on Wikipedia"></a></a>

wrote a blog about <a href="http://www.tbray.org/ongoing/When/200x/2007/01/20/On-Linking">linking</a>

<br/>

<br/>

&lt;a href="http://www.tbray.org/ongoing/misc/Tim" alt="Tim Bray's personal profile"&gt;Tim Bray<br/>

&lt;a href="http://en.wikipedia.org/wiki/Tim_Bray" alt="Tim Bray on Wikipedia"&gt;&lt;/a&gt;<br/>

wrote a post about &lt;a href="http://www.tbray.org/ongoing/When/200x/2007/01/20/On-Linking"&gt;Linking&lt;/a&gt;

Is this approach any good? Does it break anything? I realize it probably won't cover as much as the XLink spec, but it would already allow for some interesting uses. E.g.: Showing all link titles when hovering the main link, using the first live link (that doesn't return a 404).


From: Jim (Jan 21 2007, at 11:31)

Something I've long considered is incorporating archived backups into a web publishing framework. When you publish an article, the system pulls all the external links, tars them up and saves them on the server. If your link checker starts giving 404s it can substitute the page automatically, or if you decide the resource has changed beyond recognition, you can switch manually.

I've never really understood the desire behind multiple target links, from a user's point of view. In most cases, it's really bad writing to link to different resources with the same link text, even if they can be reasonably considered to be about the same topic.

Even if you accept the usefulness of multiple target links, I don't see how this solves the problem you are talking about. If something starts 404ing, the fact that you are linking to other websites doesn't really matter, users will still follow the link and get the 404. Sure, they can back up and select a different target, but they could do that with traditional multiple links too.


From: Tim Bray (Jan 21 2007, at 12:07)

Hey Rob, got a link for whatever Heraclitus said?


From: Danny (Jan 21 2007, at 13:27)

Sorry about the markup - your instructions are clearer on second reading!

As Justin pointed out, naming indirection isn't necessary (although I still have reservations about using http: URIs for things, the 303 HTTP Tax seems reasonable).

Apparently Heraclitus was known as "the Obscure" - I guess that explains why no link...


From: Peter van Kampen (Jan 21 2007, at 14:08)

"Πάντα xoῥεῖ καὶ οὐδὲν μένει."

Everything flows and nothing is left (unchanged). or Everything flows and nothing stands still.or "All things are in motion and nothing remains still."

http://en.wikipedia.org/wiki/Heraclitus


From: Rob (Jan 21 2007, at 19:33)

Um, I always figured the pre-Socratics figured in everyone's knowledge base. http://www.iep.utm.edu/h/heraclit.htm#H3 for the thought, Peter above gives you the guy himself at Wikipedia, and here is some more wikiness: http://en.wikiquote.org/wiki/Heraclitus

From time to time thinking about the interwebs seems to intersect with epistemology http://pespmc1.vub.ac.be/EPISTEMI.html. I am not well informed on the former, and only lightly on the latter, so maybe I've gone up the river myself.

I think the key word is ephemera.


From: Ken Leebow (Jan 22 2007, at 07:39)

I'd argue that more than 90% of the Internet is crap and . . . the Internet is the world's largest garbage can and . . . the garbage is never taken out.

Now imagine what this trash can will look like in 10+ years. Wow, the stench will be overwhelming.


From: Danny (Jan 22 2007, at 09:38)

PS. I just ran across a special case, where the linked resource is mirrored. I was given http://citeseer.ist.psu.edu/denti97merging.html which 404s, but the exact same material is available at http://citeseer.csail.mit.edu/denti97merging.html (although the links from there appear to be borked...)


From: John Cowan (Jan 22 2007, at 11:47)

Citeseer is borked today, except for the Japanese mirror. This can't be helped by multilink schemes: you don't want to have to record in each link what the current set of citeseer mirrors is.


From: Alexandre Rafalovitch (Jan 22 2007, at 11:58)

Would the fact that Wikipedia has just made all outgoing links 'nofollow' change your reasoning? To me, it makes pointing at the original website more appealing.

See: http://blog.outer-court.com/archive/2007-01-22-n21.html

XLinks would be nice too.


From: Doug Cutting (Jan 22 2007, at 12:19)

You could point to things through the <a href="http://www.archive.org/">Wayback Machine</a>. That way your text would continue to live in the revision of the web that it was written in.


From: Matt Laird (Jan 22 2007, at 12:36)

I know it's not the main focus of your post, but just a minor correction: the feds didn't give the largest chunk of cash. They actually gave one of the smallest shares, along with the YVR Airport Authority.

It's called the Canada Line more for political reasons, similar to the motivations behind the sponsorship scandal and the "Confederation Bridge"; when the feds give money, any money, they want recognition, and this was done partially in the hope that the province and GVRD can coax more money out of them in the future.

It's the same reason why, right around the time RAV became the Canada Line, "Canada" logos began appearing on SkyTrain. The feds gave money 20 years ago when the project was built; however, the provincial government of the day snubbed giving them recognition at the time. With all this money coming in from the feds for RAV, they needed to very quickly correct this oversight and calm the turbulent waters.

Don't you love Canadian politics?


From: Adrian (Jan 22 2007, at 14:31)

>The question is, which to link to?

Easy: the Wikipedia article links to http://www.canadaline.ca/; www.canadaline.ca doesn't link to Wikipedia.


From: roberthahn (Jan 22 2007, at 16:02)

I haven't clicked on all the links that appear in this thread, so perhaps it's been covered already, but if you really want a link bundle, what's wrong with doing something like this: from your article, link to a 'link page' which lists as many jump-off points as you want. Also: consider returning a status 300 with the link list page.


From: Jay Fienberg (Jan 22 2007, at 18:42)

One option that didn't seem to get mentioned is creating your own URIs for things you want to link to, and then linking to your own, un-fragile URIs.

For example, you might create your own "Canada links" page, and just keep adding blocks of links (html block elements that contain A and/or IMG elements) that are identifiable with a URI, e.g., canada-links.html#canada-line.
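For example (illustrative markup only), canada-links.html might contain a block like:

<div id="canada-line">
  <a href="http://www.canadaline.ca/">Canada Line (official site)</a>
  <a href="http://en.wikipedia.org/wiki/Canada_Line">Canada Line (Wikipedia)</a>
</div>

and an article would then link to canada-links.html#canada-line rather than to any one of those targets directly.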

Some of the features of XLink and earlier SGML link schemes might be thought of as a version of this "link to a block of links" citation technique.


From: pri@ddf.dk (Jan 23 2007, at 05:37)

Tim Bray poses a fairly open question. What constitutes a solution depends a lot on circumstances. Cf. this definition of obscenity: "I know it when I see it".

http://laws.findlaw.com/us/378/184.html

What constitutes an appropriate resolution of a link in one context might be obscene, err inappropriate, in another context.

Common elements of the solutions that I know of:

- a taxonomy or controlled vocabulary

- a resolution service with a flexible API

- some way of interacting with the sentient being in front of the display

* Examples of an implementation of HyTime/XLink-style multiple-arched links:

http://doi.contentdirections.com/mr/cdi.jsp?doi=10.1220/misc2

http://www.crossref.org/mr/mr_main.html

* Published Subject Identifiers is designed to point to concepts in an unambiguous way:

http://www.oasis-open.org/committees/tc_home.php?wg_abbrev=tm-pubsubj

http://www.altheim.com/bunny/

* OpenURL is designed to resolve a link to 'an appropriate resource':

http://www.niso.org/standards/standard_detail.cfm?std_id=783

* Librarians have been using authority files and controlled vocabularies for centuries, and code lists are crucial for business:

http://www.oclc.org/research/projects/viaf/

http://www.collectionscanada.ca/8/4/r4-280-e.html

http://docs.oasis-open.org/ubl/os-UBL-2.0/UBL-2.0.html#d0e3475

* And folksonomies are gaining ground:

http://del.icio.us/

* Conceptual linking isn't exactly blue ocean, research-wise:

http://citeseer.comp.nus.edu/408262.html

kind regards

Peter Ring


From: Andy K (Jan 24 2007, at 03:17)

Fascinating thread, and I have many thoughts on these issues.

About which site to link: I say the one that most represents the viewpoint you want to take. So if you're a Canadaline proponent, you link to their website; if you're indifferent, you link to Wikipedia; if you're an opponent, you link to some website that argues why it's so bad. As suggested, if any link seems likely to be transitory, link to the webarchive (and donate to them for good measure).

About scholarly citations: they are just pointers to text that agrees with you (in a rigorous framework, but that's just context, which is arbitrary). It doesn't matter that they are aggregate, that's just the way to find the precise agreement, an address format for a different "network." And I disagree that they are different from a URL on the web. Libraries are single points of failure just as much as Wikipedia or the Internet Archive (think Alexandria). And you don't have to burn down a library, books are rotting with time. Anyways, the problem sets will merge when all printed matter known to exist has been digitized.

About fragile links and 404s: What's the big deal? The browser is software, let it deal with them. Invent some Firefox plugin that pings and prefetches and the UI will eventually settle on something that most people agree is a sensible way to visualize a known broken link. If it gets adopted and drives bandwidth through the roof, someone will invent a 404 detection protocol.

About linking to the object themselves: that's just silly or I'm old-fashioned.

About indirection: Invent meta links such as <a type="Search" term="Chartres Cathedral" defsite="google.com" numresults="20">...</a> and <a type="reference" term="Chartres Cathedral" defsite="wikipedia.org" numresults="1">...</a>. Then invent a Firefox plugin that handles these things and allows users to set their preferences for each type. Of course you'll need to keep a valid href in them until this thing catches on.

About linking 2.0: I've often wanted this myself, originally for allowing multiple file downloads without zipping (or maybe so the software can do the zipping and unzipping for me), but it would work for your aggregate references (or mirrors or whatever). Again, just make a Firefox plugin that reads the user's preferences and deals with the links accordingly. I guess you'll have to invent some UI for it, someone suggested a drop down menu, but being open source, it'll converge to something reasonable once adopted (<incredulouslaugh>bwaaa-ha-ha-ha</>).

If Tim's request for Heraclitus links was tongue-in-cheek, Rob's reply with a Google feeling lucky link, a Wikipedia link, and a 404 was right on. Well played.

PS: Please forgive me if wanting to change web paradigms by writing Firefox plug-ins is naive, I haven't tried it myself.


From: Stephen (Jan 28 2007, at 05:18)

I looked pretty hard at the concept of using JavaScript to convert XLink-annotated hyperlinks into multiple destination links ... but IMHO it's not really feasible in the short term.

The big problem is that most HTML browsers either render unknown tags as plain text or drop the text content completely. This makes most XLink markup look an ugly mess unless we guarantee that a working JavaScript implementation will be there to pull out the tags and replace them with nicely-formatted HTML. Another big problem is the almost complete lack of support for namespaced attributes in browsers.

My proposal (http://guruj.net/node/44) would be for a simple <span class="multiHref" /> wrapper with some JavaScript to convert multiple links into a single link with dropdown on compatible browsers.
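A sketch of the markup side (the class name is the one proposed; the contents are illustrative):

<span class="multiHref">
  <a href="http://www.canadaline.ca/">Canada Line</a>
  <a href="http://en.wikipedia.org/wiki/Canada_Line">Canada Line (Wikipedia)</a>
</span>

With JavaScript available, the script folds this into a single link with a dropdown of alternatives; without it, readers simply see the individual links.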


From: T.J. Hart (Jan 29 2007, at 05:21)

If you want a link to last forever, why can't you link your information to a site by time? I know that there are web archives and you can watch pages change based on the passage of time. Is there no way to set up a link just to look at a page on a certain date?

Please keep writing. Most interesting. Not that I agree with everything you say, but you do make me think. Thank you so much.


From: Michael @ SEOG (Jan 30 2007, at 22:10)

Another thing to consider is the ranking effect of linking to the original site vs. Wikipedia. In most cases, Wikipedia really does not need any additional Google juice, but maybe that original site does -- especially when you didn't just find it in Google. Since search engines rate relevancy and natural links so highly, providing contextual links to content you feel is worthy helps to benefit the original content writer and aid their site in attaining more visibility.

Wikipedia will always gain a fair number of links on its own, but it is also important to point to sites that may not be as sophisticated in their own promotion efforts -- as Google wants everyone to do, "reward good content."

