
LocalNames

Local Names is a project to use wiki-like linking just about everywhere.

Imagine: You’re reading a web page you like, such as: http://slashdot.org/

You are going to talk about it in your blog, so you click a bookmarklet, and name the page, “SlashDot.”

Then you go to your blog, or your favorite wiki, or are writing a comment somewhere, and you say, “I was just looking at SlashDot the other day, and I saw this idea, …”

When you hit “Save” (or “submit” or “comment”), your blog automatically hyperlinks “SlashDot” to http://slashdot.org/ for you.

How Can This Work?

When you clicked the bookmarklet, it made a note that “SlashDot should point to http://slashdot.org/”.

It made that note in your “namespace.”

Your blog has a Local Names plugin, and you’ve configured it to look at your namespace. It sees that “SlashDot” was previously bound to http://slashdot.org/ . So, it links it for you.

The part of the plugin that knows how to read a namespace is called a “resolver.” (There is one per software package, per website, per WikiEngine, …)
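
As a sketch of what such a resolver might do, assuming the namespace has already been read into a plain name-to-URL dictionary (the example data and the linkify function below are only illustrative, not part of any real plugin):

 import re

 # A namespace, as a resolver sees it: names bound to URLs.
 # (Hypothetical data -- a real resolver would read this from
 #  wherever your bookmarklet stored the note.)
 namespace = {
     "SlashDot": "http://slashdot.org/",
 }

 def linkify(text, namespace):
     """Hyperlink every known name that appears in the text."""
     for name, url in namespace.items():
         # \b keeps partial words from being linked.
         pattern = r"\b%s\b" % re.escape(name)
         text = re.sub(pattern, '<a href="%s">%s</a>' % (url, name), text)
     return text

 print(linkify("I was just looking at SlashDot the other day.", namespace))
 # -> I was just looking at <a href="http://slashdot.org/">SlashDot</a> the other day.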

Local Names in Wiki

Wikis are half-way there, because wikis already have resolvers and namespaces built in:

For LocalNames to work in phpBB, or in blog software, you have to do some plug-in work.

You also have to do some plug-in work if you want your wiki to link outside of its own local namespace.

Summary So Far

So we have:

Got it so far?

NamespaceDescription

We’d like to use names from one place (say, a wiki) in another place (say, in a Slashdot comment.) How is this going to work?

Somehow, Slashdot has to know to look at the wiki’s namespace, and it has to know how to decipher that namespace.

The simplest format we can think of for doing this is to just use a normal web page.

Interpret every link on a web page as a local name definition: “this link text is the name, and it points to the linked URL.”

So the CommunityWiki PageIndex is a namespace definition. Specifically, it is the namespace definition for CommunityWiki.

A wiki’s page index consists of the name of each page on the wiki, hyperlinked to the page of that name. What you do is take all the hyperlinks on the page and make those your name-URL bindings.

i.e.:

 blah blah blah <a href="http://example.net/">example</a> blah blah blah

…establishes that “example” should link to http://example.net/ .
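
A sketch of reading a page that way, using Python’s standard html.parser; the read_namespace name and the first-link-wins rule are choices made for this example, not part of any specification:

 from html.parser import HTMLParser

 class LinkCollector(HTMLParser):
     """Collect link-text -> href bindings from an HTML page."""
     def __init__(self):
         super().__init__()
         self.bindings = {}
         self._href = None
         self._text = []

     def handle_starttag(self, tag, attrs):
         if tag == "a":
             self._href = dict(attrs).get("href")
             self._text = []

     def handle_data(self, data):
         if self._href is not None:
             self._text.append(data)

     def handle_endtag(self, tag):
         if tag == "a" and self._href:
             name = "".join(self._text).strip()
             if name and name not in self.bindings:
                 self.bindings[name] = self._href   # first binding wins
             self._href = None

 def read_namespace(html):
     """Turn a web page into name -> URL bindings."""
     collector = LinkCollector()
     collector.feed(html)
     return collector.bindings

 page = 'blah blah blah <a href="http://example.net/">example</a> blah blah blah'
 print(read_namespace(page))   # {'example': 'http://example.net/'}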

I like it a lot; it makes it much more flexible. You could use practically anything on the web as your source of names: tags, dictionaries, web searches, directory listings, blogs. It’s so simple! – RadomirDopieralski

There are web glossaries, such as “Theories About Conforming.” They are a rich source of reusable names.

Question: “What if there are several links to different URLs with the same link text, like the dreaded “more” links?” – RadomirDopieralski

Answer: Ideally, you’d be using a page that was intentionally made for doing this. Sub-ideally, we have to tolerate the dreaded “more” links, or manually cull them out.

Yes, Oddmuse’s implementation will use the first “more” link. But luckily we’re not linking all words, so this is only a problem if you do in fact link to [[more]], which is rare. So effectively we’re just wasting some space, not corrupting the usability. Similarly, if you screen scrape a wiki’s page index, there will be some useless entries from the header and footer. It’s nice if wikis provide a dedicated list of links, such as these:

But it is not necessary at all. If you look at the NearMap, you can check the result of screen scraping Meatball’s index:

Problems

One problem is loops. Assume, for example, the following sites:

So why can’t the wiki just test for loops?

If you check the following links, you’ll know why:

  1. http://communitywiki.org/en/SandBox
  2. http://communitywiki.org/de/SandBox
  3. http://communitywiki.org/odd/SandBox

If you check the links, you’ll notice that #1 and #2 point to the same page using a different interface (English and German), whereas #3 points to a different wiki altogether. So assume that #1 is site A from the example above; the definition for TEST on site C could be pointing to #2 or #3 – if it points to #2 we have an infinite loop; if it points to #3 we don’t. I don’t think it’s possible to build a system that will automatically do the right thing. Solutions will always be brittle.

Other Ideas

See Also

CategoryLinking

Discussion

Making Sense of Different Sorts of Links

Well, InterLinks are explicit: MeatBall:AlexSchroeder points to the AlexSchroeder page on Meatball because I said so. AssumeGoodFaith points to a page on Meatball wiki as well, but not because I said so – instead, a little search strategy is used: First, we check whether the page exists here, then we go through the NearMap and check whether the page exists on any of those wikis, and if it does, we link there. Should we one day create a local copy of the page, the link will automatically point to the local page, since the search strategy looks at local pages first.

Lion’s system formalizes this, and provides a generic framework where you can link names to URLs in a namespace, and delegate the lookup to other namespaces if no local definition exists.

Thus:

Lion’s Local Names               | Oddmuse Local Names
Name                             | Page
URL                              | URL of the page
Definition of a name             | Local page exists
Delegation to a remote namespace | List remote wiki on the NearMap

I’m sure we could make a better comparison than the table above, but it’s a quick start to see how the Oddmuse:Near Links compare to Lion's Local Names. And I think that makes it pretty clear that LocalNames are not the same as InterLinks.

If I understand correctly:

Hm, …

LocalNames is not the same sort of thing as InterLink or NearLink.

InterLink and NearLink are specific name resolution strategies.

Some wikis, and the Local Names system, support those two strategies. 1

If a wiki were to natively support LocalNames, and made use of the traditional query resolution style, it would automatically get InterLink and NearLink as part of the deal.

But there is no one resolution process that is Local Names.

Where you are right:

  • namespace pages – Local Names relies on namespace descriptions; they can be produced by a wiki, produced by a wrapper around a wiki, written by hand, created with a tool, etc.
  • “FutureLink” – actually, it’s not called “FutureLink” in Local Names; it’s called “X FINAL” instead – that’s where a name goes if there’s no suitable binding.

Things you missed:

  • “this wiki” – there is no check for “this wiki” first, because there’s no such idea as “this wiki” in Local Names; if a wiki were to use Local Names, it would likely just produce its namespace description, and likely use that as the origin of its local names queries
  • NearLink – Local Names supports “near links” – if you query a namespace, by the “traditional” query style, then that implies a search of all neighboring namespaces as well (1 level deep)

Another thing to note about Local Names:

  • There’s no one specific way to look up a name, in Local Names.

Query servers can support multiple ways of looking up names. They should all have a “default” way, though, and they should all support the “traditional” lookup method. (Which is basically: 1. Look Here. 2. Look all Around. 3. Use X FINAL.)

This is so that the system can support all the different ways of looking up names: Do you want to be picky about foreign characters? Do you want to be picky about spaces? Do you want to be picky about capitalization? Do you want to search deep through the namespace web? Do you only want to search a single namespace, and nothing else, while still being able to perform namespace hopping? And so on.

Rather than construct an entire constraint based query language, I decided: “I’ll just let people name styles of lookup, and hand-code them into their query servers. People are going to be using traditional most of the time, anyways.”
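
For illustration, here is a sketch of the “traditional” style in those three steps, assuming each namespace is just a dictionary of bindings plus a list of neighboring namespaces, and assuming X FINAL is a URL pattern with a $NAME placeholder (the data structures and the placeholder convention are assumptions of this sketch, not the XML-RPC specification itself):

 def traditional_lookup(name, here, namespaces):
     """1. Look here.  2. Look all around (one level deep).  3. Use X FINAL."""
     # 1. Look here.
     if name in here["bindings"]:
         return here["bindings"][name]
     # 2. Look all around: every neighboring namespace, one level deep.
     for neighbor in here["neighbors"]:
         bindings = namespaces[neighbor]["bindings"]
         if name in bindings:
             return bindings[name]
     # 3. Use X FINAL -- assumed here to be a URL pattern containing $NAME.
     return here["x-final"].replace("$NAME", name)

 # Hypothetical example data:
 namespaces = {
     "wiki-a": {"bindings": {"SandBox": "http://example.org/a/SandBox"},
                "neighbors": ["wiki-b"],
                "x-final": "http://example.org/a/$NAME"},
     "wiki-b": {"bindings": {"LocalNames": "http://example.org/b/LocalNames"},
                "neighbors": [],
                "x-final": "http://example.org/b/$NAME"},
 }

 print(traditional_lookup("LocalNames", namespaces["wiki-a"], namespaces))
 # found one level away, in wiki-b
 print(traditional_lookup("NoSuchPage", namespaces["wiki-a"], namespaces))
 # falls through to wiki-a's X FINAL: http://example.org/a/NoSuchPage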

Local names – There is the scent of hearing somebody scratching. As if we dug tunnels towards each other. Local names and wiki-net. As if somebody scratches not far away and the tunnels are to connect soon. Just a September-scent.

InterLink, NearLink, LocalNames are for better connection and that’s fine. But I’m missing the backlink (another word for re-ligion, and this shows us how important it is). In my opinion each page has to have a main backlink (rc-tree rudiments) and a list of (perhaps weighted) backlinks (rhizome).

See also BalkanizedWeb.

Mattis – I feel the image too.

I think it’s the desire to connect ideas with other people. Local Names is a step forward for mental contact.

I’d like near-universal spam-proof moderated BackLinks as well; I love how OddMuse implements it with referrers, for example.


If anyone would like to make use of this method in a tool or something, please let me know; I’m eager to help make it happen.

In fact, I’d give 100 points on the InternetExchange for the opportunity to do so. :) I really want to see this work.

I’ve been trying out local names a bit lately.

Mighty cool thing that!


The community-wiki imports the local names from several wiki-hives:

See the CommunityLocalNames

It would be nice to get a link like [[kabo-wiki-hive_-_local-names-wiki_-_talk?]] to work on community-wiki as well (http://socialsynergyweb.com/cgi-bin/wiki2/LocalNames/Talk). Within and across the above wiki-hives, local page names already work.

All right, it’s CommunityLocalNames here, not LocalNames. It looks like I can link to a good deal of wiki pages in the wiki-net hives in a “clean” way now. Lemme try some:

  • [[obm-wiki-hive_-_world-democracy-wiki_-_wiki-net,_explained?]]
  • [[kabo-wiki-hive_-_kabo-wiki-center-en_-_talk?]]
  • [[kabo-wiki-hive_-_kabo-wiki-center-en_-_edit_the_day-page_talk_for_today?]]
  • [[eArt-wiki-Nest_-_Peinlich-wiki_-_lokale_Namen?]] [de]
  • [[kabo-wiki-hive_-_local-names-wiki_-_talk?]]

Testing some more names I added on CommunityLocalNames. It’s local names because every wiki cares only for its own local names. Care for your local names at home and use all local names, yours included, all over.

[[link_language?]], [[community-wiki_-_link_language?]], [[hive_mind?]], [[community-wiki_-_hive_mind?]], [[mind_the_gap?]], [[community-wiki_-_mind_the_gap?]]

Hm, the clean cw page names do not yet work on [[kabo-wiki-hive_-_local-names-wiki_-_talk?]] 2008-10-30 (no clean page names for day-pages yet). But maybe they will when I wait a bit. … hm.

The phrases link language, community-wiki, hive mind and mind the gap are LinkLanguage examples that link in our minds.

Once a person or group defines a Name Locally, their brains link it to that definition regardless of case or spurious punctuation such as extra []’s, .’s, :’s, ;’s, -’s, _’s, +’s, %20’s, etc.

WikiSoftware should be just as loose.
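
A sketch of what “just as loose” could mean in code; the particular normalization rules here are only a guess at the spirit of the paragraph above, not any fixed standard:

 import re

 def normalize(name):
     """Fold case and drop spurious punctuation, so that
     'Mind_the_Gap', 'mind-the-gap' and '[[Mind the Gap]]' all match."""
     name = name.replace("%20", " ")
     name = re.sub(r"[\[\].:;\-_+]", " ", name)    # punctuation becomes spaces
     name = re.sub(r"\s+", " ", name).strip()      # collapse runs of whitespace
     return name.lower()

 print(normalize("Mind_the_Gap") == normalize("[[mind-the-gap]]"))   # True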

(re: Loops)

Ah, … so the situation is that you (A) link to someone (C) who links back to you (A), …

The problem seems endemic to redirects, period. Like email autoreplies in a war with one another.

I think the cheap solution would be to just point, and not try to resolve the chain. That is, if you are resolving a page on (A), then point it to (C). Leave it to the web browser (rather than the page renderer) to try and resolve the infinite loop.

The deeper solution would be: as you resolve the target, keep a list of the chain so far. When you get to a repetition, realize, “This is looping.” The only way it will go on forever is if the URLs generated are always different. (Which I could imagine – if unique random IDs are part of generated page URLs, or something.) But if it’s just /en and /de, you will exhaust the supply at some point.

Something like:

 def redirect(url):
     """Return the Location: URL that the given URL redirects to.

     Returns None if it does not redirect.
     """
     ...  # (code goes here)

 def resolve_url(url, L=None):
     """Follow redirects from url, stopping if we revisit a URL."""
     if L is None:
         L = list()  # first time through?  start w/ an empty list
     if url in L:
         return "infinite loop"
     next_url = redirect(url)
     if next_url is None:
         return url   # no further redirect: this is the final URL
     else:
         L.append(url)
         return resolve_url(next_url, L)

That would work. Since HTTP is stateless, and the wiki software we use (Oddmuse) is also stateless, I’d have to store the URLs visited in a cookie. That would only work if all the sites in the loop use the same software. Or I’d have to add state to the software.

Luckily there’s a cheaper alternative. The software has a SurgeProtector and it kicked in when I ran into the problem: I followed the Meatball link, load load load load load load load load load load (in other words, Firefox will not detect a loop), and I got the message “Please do not fetch more than 10 pages in 20 seconds.”

I think what we’re seeing is that the importing of entire namespaces instead of manually defining single names one after another results in a brittle system. We cannot control what remote sites will define, so other people will set up infinite loops for us.

Maybe this is only ever a real problem if using LocalNames and NearLinks on the same site?

I don’t understand how it is that NearLinks and LocalNames together are the deadly combo.

If the browser is fast enough, it may even need to be something like “Please do not fetch more than 2 pages in a single second” to sufficiently capture the condition.

When I wrote up the problem description, I had to invoke both NearLinks and LocalNames to describe the problem. But you are right, you can set up loops using just one of these technologies. The key is that you are setting up an external redirect. Once you can do that, you can set up loops.

Importing tons of external redirects via NearLinks or LocalNames just enables you to shoot yourself in the foot using an automated rifle. X-)

You might modify the SurgeProtector to handle the loop special case – really, people shouldn’t be fetching the same page more than twice (once??) within the 20-second time frame, so the code could check for a repeat-fetch on a single page and report it as a likely loop.

Yes, tweaking the SurgeProtector is probably the simplest way to break a redirect loop.
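
A sketch of what that tweak might look like, assuming the surge protector already tracks fetches per visitor; the repeat-fetch check is the only new part, and all of the names here are illustrative rather than Oddmuse’s actual internals:

 import time

 WINDOW = 20        # seconds, as in "10 pages in 20 seconds"
 LIMIT = 10
 recent = {}        # visitor -> list of (timestamp, url) pairs

 def check_fetch(visitor, url):
     """Return None if the fetch is fine, otherwise a complaint string."""
     now = time.time()
     history = [(t, u) for (t, u) in recent.get(visitor, []) if now - t < WINDOW]
     if any(u == url for (t, u) in history):
         # The same page twice inside the window smells like a redirect loop.
         return "Likely redirect loop: %s was already fetched in the last %d seconds." % (url, WINDOW)
     if len(history) >= LIMIT:
         return "Please do not fetch more than %d pages in %d seconds." % (LIMIT, WINDOW)
     history.append((now, url))
     recent[visitor] = history
     return None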

Another way to fix the problem: Avoid literal HTTP 301 redirects entirely.

  • Site A is missing the page TEST
  • The NearMap on A includes site B
  • Site B defines the page TEST
  • The page SillyName? on site A has a wiki-linked phrase “TEST”
  • When a user asks for page SillyName?, the rendering engine at A starts sending that page to the user. When it gets to the wiki-linked phrase “TEST”, it first checks to see if “TEST” exists locally at A. Nope. Then it checks to see if “TEST” exists at B. Yes – so it renders that phrase “TEST” as a direct HTML link to page “TEST” at B. (If all the checks come back “No”, then the rendering engine puts the little question mark by it).
  • When the user clicks on “TEST”, he now goes directly to page “TEST” at B. :ok:
  • C is a site that exports a LocalName for TEST pointing to A X-|
  • When the next user looks at page SillyName?, the rendering engine at A now uses that local name, and creates a direct HTML link to (nonexistent) page “TEST” on A.
  • After that user clicks on that link, the rendering engine at A sees that page “TEST” doesn’t exist … :-(
  • … But A doesn’t make a HTTP 301 redirect …
  • … Instead, A spits out a manufactured HTML page – a custom HTTP 404 page.
  • That page says something like “Sorry! There is no page TEST here at A. To create page TEST here at A, do this … You may be looking for page TEST over at B. … C tells me that there is a rumour going around that there exists a TEST page at this other URL …”. <3

The user could manually click on that last URL (that says it came from C) over and over again, and see that same 404 page (from A) over and over again. That would be confusing, but not disastrous.

That 404 page should give the user enough information to fix that confusion: (a) go to C, and fix their incorrect URL for “TEST” to point to a good URL; or (b) Remove C from A’s list of places to check for local names, and forget all the local names C already gave us, until C stops giving A nonsense; or (c) create the page “TEST” at A.
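
A sketch of building that 404 page instead of issuing a redirect; the NearMap structure, the $NAME URL pattern, and the /edit/ path are assumptions made for the example:

 def missing_page_response(name, near_map, imported_names):
     """Build a custom 404 body with suggestions, rather than a 301 redirect."""
     lines = ["Sorry! There is no page %s here." % name,
              "To create page %s here, edit it at /edit/%s." % (name, name)]
     for site, pattern in near_map.items():
         # e.g. pattern = "http://example.org/b/$NAME"
         lines.append("You may be looking for page %s over at %s: %s"
                      % (name, site, pattern.replace("$NAME", name)))
     if name in imported_names:
         lines.append("There is a rumour going around that %s exists at %s."
                      % (name, imported_names[name]))
     return 404, "\n".join(lines)    # always a 404 status, never a 301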

Hmm… I think this boils down to “don’t do external redirects if the request is not from the local site.” That makes sense. A will only redirect to B if the user followed a link from A. If the user followed a link from C and ended up on the non-existing page on A, then he’s not redirected any further.

Essentially this is what internal redirection using #REDIRECT already does: It passes along a parameter when it redirects. As it redirects, it checks for the absence of said parameter so that it will only ever redirect once.
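
A sketch of that redirect-once rule, with the pass-along parameter hypothetically called “redirected”:

 from urllib.parse import urlencode

 def maybe_redirect(target_url, request_params):
     """Redirect only if this request did not itself arrive via a redirect."""
     if "redirected" in request_params:
         return None                     # already redirected once; stop here
     # Assumes target_url has no query string of its own.
     return target_url + "?" + urlencode({"redirected": "1"})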

I like the idea!

Should drop in this link to an archive.org version of Lion’s old website; BayleShanks expressed interest in getting the file from LionKimbro to host it again at the last RecentChangesCamp in Montreal. http://LocalNames.info

Footnotes:

1. As for Local Names, the strategies are completely defined in the “Lookup Path” and “Traditional Query Styles” in the Local Names XML-RPC Query Interface specification.
