DistributingWiki

Soon DistributedEditing and 2007-02-08 will be merged into this page.

distributing with automatic forking

When you have implemented NearLinks, people following a link to the non-existent local page X will be redirected to a near page X on a related wiki. The local wiki can keep count of these redirects and pull popular pages from remote wikis to the local wiki, thereby distributing popular pages.

This InterWiki idea runs against OnceAndOnlyOnce. Unlike FreeNet [1], this would not only copy popular pages, it would also fork them automatically, since the current description doesn’t give every page (revision) a unique key to identify it. We could use the URL of the original page as the key, but implementing all of the extras (editing the page, for example) would require a lot more thought.
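
A minimal sketch of the counting-and-pulling half, in Python. The threshold, the function names, and the use of the origin URL as the identity key are all assumptions, and editing the pulled copy is exactly the part the sketch leaves out.

    # Hypothetical sketch, not an existing implementation: count NearLink
    # redirects for a missing local page and pull the remote page once it
    # proves popular, keeping the origin URL as the copy's identity key.
    import urllib.request
    from collections import Counter

    PULL_THRESHOLD = 3            # assumed: pull after this many redirects
    redirect_counts = Counter()   # origin URL -> number of local redirects

    def record_redirect(origin_url):
        """Called whenever a missing local page redirects to a near page."""
        redirect_counts[origin_url] += 1

    def maybe_pull(origin_url, local_store):
        """Copy the remote page locally once it has proven popular.

        The origin URL stays attached to the copy, so a later
        implementation could still tell a faithful copy from a local fork.
        """
        if redirect_counts[origin_url] >= PULL_THRESHOLD and origin_url not in local_store:
            with urllib.request.urlopen(origin_url) as response:
                local_store[origin_url] = response.read().decode("utf-8")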

Watch out for CopyrightTraps, however. If the licensing terms of the remote wiki impose restrictions on copying and editing, your implementation will have to take care of this.

One technical solution would be automatic licensing negotiation. It is probably simpler to just give wiki administrators the power to enable and disable the pulling feature on a wiki-by-wiki basis, so that non-cooperating wikis can stay on your NearMap without being pulled from.
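
For the administrator switch, a tiny sketch of what a per-wiki flag on top of a NearMap might look like; the entries, URLs, and field names are invented, not taken from any existing engine.

    # Hypothetical per-wiki switch layered on a NearMap; everything here is invented.
    NEAR_MAP = {
        "FriendlyWiki":       {"url": "http://example.org/friendly/%s", "allow_pull": True},
        "NonCooperatingWiki": {"url": "http://example.net/other/%s",    "allow_pull": False},
    }

    def may_pull(wiki_name):
        """Near links keep working for every wiki; pulling only from cooperating ones."""
        entry = NEAR_MAP.get(wiki_name)
        return bool(entry and entry["allow_pull"])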

This is also known as DistributedWiki. See also: DistributedEditing, CommunityRepository

Discussion

The idea above relies upon identifying popular content and then distributing it. The approach detailed above is essentially a form of “caching with write” mechanism, and in that case you hit the same issues that web caches hit: most pages are hit only once, but any page accessed at least two or three times is often hit many more times than that. – MichaelSparks?

Perhaps implementing only the first half of the idea would already be useful: the cache mentioned at DistributedEditing.

blue sky

FailureTolerantWiki? / FaultTolerantWiki? : In an ideal world, a “single” page would be distributed among computers on several continents. If any one of those computers were suddenly chopped into little pieces with a fire axe, all the content of that page would still be available and editable as normal. Normal users would not notice anything different.

It seems to me that given any possible method of implementing DistributedEditing, it would only take one more tiny step to make the wiki FailureTolerant?.

DavidCary

See WikiFeatures:FailSafeWiki.

Discussion

I feel this page should be merged with DistributedEditing. We should also write DigitalRightsManagementForWiki?, which is a different subject.

If there’s a genuine difference between this page and DistributedEditing, I would want to highlight the difference in the page name.

I like fault-tolerant wiki; that’s something we definitely want to do.

But I’m generally skeptical of efforts to distribute wiki.

It seems to me like this: a wiki has a community around it. They set things up the way they like.

The boundaries between wiki should be clear.

Why distribute, then?

If you’re worried about the sanctity of the PageDatabase, then just use redundancy: FailSafeWiki?.

Why do you need anything else, anything more?

I feel as if there is an insensitivity to the difference between communities here.

In domains other than wiki, distributing is a way to create other instances of the same material, for the sake of keeping that material available even if servers, or parts of the net, go down. For example, last night CW was down for several hours, and nobody could work on it. If it had been distributed, one server going down wouldn’t have been a problem at all.

I don’t see how well this would work with wikis, however, because of what lion says about communities being attached to wikis. It’s easy enough to distribute material that doesn’t change, but communities edit their wikis, and tone and LinkLanguage can differ a lot between communities. Offhand, I can’t see how to do this well (which might just be a failure of my imagination, smile, and any moment now somebody will show me the way).

I can understand how to make a failsafe for wiki.

Just distributing individual pages between wikis or something; I don’t think that’s the way to go. Again, different LinkLanguage, different cultures, different audiences, different everything.

But failsafe is exactly what we want. That’d be awesome. We’ll get there.

We could make three distributed Wikis (i.e. simply on different sites) that attempt to keep synchronised, merging updates; if a desync occurs, all three Wikis remember who made which changes, and merge them again when connections are restored. That way, if one goes down, we simply start using another. (This could also be used to synchronise parts of a Wiki, if desired: my usual pair, DividedCommons and UnifiedCommons, applied to updates.) It would probably create havoc occasionally, but that’s part of the fun.
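
A toy model of the “mirrors that remember who did what” idea, in Python; every class and rule here is invented for illustration only.

    # Each mirror keeps an append-only edit log; after a desync, mirrors
    # exchange logs and replay the edits they have not seen yet.
    from dataclasses import dataclass, field

    @dataclass(frozen=True)
    class Edit:
        page: str
        author: str
        timestamp: float
        text: str                 # full new page text, to keep the toy simple

    @dataclass
    class Mirror:
        name: str
        pages: dict = field(default_factory=dict)   # page name -> current text
        log: list = field(default_factory=list)     # every Edit this mirror knows about

        def edit(self, e: Edit) -> None:
            self.pages[e.page] = e.text
            self.log.append(e)

        def sync_from(self, other: "Mirror") -> list:
            """Replay edits we have not seen yet; return the ones that conflict."""
            theirs = set(other.log)
            unseen = sorted(theirs - set(self.log), key=lambda e: e.timestamp)
            conflicts = []
            for e in unseen:
                diverged = any(m.page == e.page and m not in theirs for m in self.log)
                if diverged:
                    conflicts.append(e)   # both sides edited the page during the desync
                else:
                    self.edit(e)
            return conflicts

In this toy model a conflict simply means both mirrors touched the same page while out of contact; a real implementation would attempt a three-way merge against the last common revision before giving up, which is where the occasional havoc would come from.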

Well, we could start with a primitive cron job pulling a copy of the community wiki tarball, extracting it, and making it public. That would be our live “backup”. We can start improving once we have the basics in place. Might be an interesting learning experience.
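
A rough sketch of that primitive job, in Python; the tarball URL and target directory are placeholders, not CommunityWiki’s real addresses.

    # Fetch the wiki tarball, unpack it, and drop it where a web server
    # already serves files. URL and paths below are placeholders.
    import tarfile
    import urllib.request

    TARBALL_URL = "http://example.org/communitywiki-pages.tar.gz"   # placeholder
    PUBLIC_DIR = "/var/www/cw-mirror"                               # placeholder

    def pull_backup():
        local_tarball, _ = urllib.request.urlretrieve(TARBALL_URL)
        with tarfile.open(local_tarball, "r:gz") as tar:
            tar.extractall(PUBLIC_DIR)

    if __name__ == "__main__":
        pull_backup()

Run it from cron once a day or so (something like “0 4 * * * python3 pull_backup.py”) and improve from there.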

Using an IntComm:EventSystem?, you can automatically repost (to mirrors) whenever someone posts something to a wiki.

At Wikimania 2005 WardCunningham ended his presentation with a “dream” of his: pages being copied and modified from host to host in a distributed network. Not every host was carrying faithful copies. Copies would get lost and changed. I think he didn’t really think it through. But at least he shares the goal. :)

In the Wikipedia:Erlang programming language, FaultTolerance? is achieved in part by ImmutableVariables?: once a variable is bound, its value never changes, no matter which node or process looks at it. The same principle could be applied to WikiPages in a distributed wiki. There is only one page everywhere, although it is also editable everywhere. Thus, everywhere the page lives is part of the community. This would likely need some other social negotiation and rules to work at an EcosystemOfNetworks scale.
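
A toy Python illustration of single-assignment applied to pages: revisions are immutable, content-addressed values that are identical on every node, and only the per-page “head” pointer ever changes. Nothing here maps onto a real wiki engine.

    import hashlib
    from dataclasses import dataclass
    from typing import Dict, Optional

    @dataclass(frozen=True)
    class Revision:
        parent: Optional[str]    # key of the revision this one was edited from
        text: str

        @property
        def key(self) -> str:
            raw = (self.parent or "") + "\n" + self.text
            return hashlib.sha1(raw.encode("utf-8")).hexdigest()

    revisions: Dict[str, Revision] = {}  # key -> revision; safe to copy to any node
    heads: Dict[str, str] = {}           # page name -> current revision key

    def edit(page, new_text):
        rev = Revision(parent=heads.get(page), text=new_text)
        revisions[rev.key] = rev         # the value itself never changes anywhere
        heads[page] = rev.key            # only this pointer needs agreement between nodes
        return rev.key

The revisions themselves behave like bound variables; all the social negotiation mentioned above ends up concentrated in who gets to move the heads pointer.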

Talking to xorAxAx on IRC, I realized that it should be possible to write a Perl script that keeps two Oddmuse wikis in sync. It would generate multiple hits per page of the wikis in question, but it would be able to resolve edit conflicts if and only if old revisions had not yet expired (by default they expire after two weeks). The only drawback is that consecutive edits on the source wiki get collapsed to a single edit on the target wiki.
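
For the record, a rough one-way sketch of that idea in Python rather than Perl. It only copies recently changed pages and does none of the conflict resolution; the action=rc;raw=1 and action=browse;raw=1 URLs are the raw interface Oddmuse provides as far as I recall (check your version), the wiki URLs are placeholders, and the save request is left as a stub because its parameters depend on the target wiki’s setup.

    # One-way sync sketch between two Oddmuse wikis; URLs are placeholders
    # and the save step is a stub. The raw actions may differ by version.
    import urllib.parse
    import urllib.request

    SOURCE = "http://example.org/cgi-bin/source-wiki"   # placeholder script URLs
    TARGET = "http://example.org/cgi-bin/target-wiki"

    def changed_pages(days=1):
        """List pages recently changed on the source wiki via raw RecentChanges."""
        url = f"{SOURCE}?action=rc;raw=1;days={days}"
        with urllib.request.urlopen(url) as resp:
            raw = resp.read().decode("utf-8")
        # the raw listing names each changed page on a "title: ..." line
        return [line.split(":", 1)[1].strip()
                for line in raw.splitlines() if line.startswith("title:")]

    def page_text(page):
        """Fetch the current raw text of a page from the source wiki."""
        url = f"{SOURCE}?action=browse;raw=1;id={urllib.parse.quote(page)}"
        with urllib.request.urlopen(url) as resp:
            return resp.read().decode("utf-8")

    def push(page, text):
        """Save the text on the target wiki.

        Left as a stub: the form fields, edit password, or question
        answer depend on the target wiki's configuration.
        """
        raise NotImplementedError("fill in the target wiki's save parameters")

    if __name__ == "__main__":
        for page in changed_pages():
            push(page, page_text(page))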


See also DistributedEditing, OfflineWiki, MeatBall:PeerToPeerWiki

CategoryInterWiki?
