DecentralizedWiki

Soon this page will be merged into OfflineWiki.

I mean the server, the one you can chop in two with a fire-axe. A decentralized wiki can’t be switched off unless you switch off all the computers it runs on. The idea exists in Freenet, but Freenet encrypts everything. We’d want to be able to read even what is temporarily stored in the 150 MB we decided to dedicate to shared server load on our hard disks. I talked to Ludovic Dubost of x-wiki about that a while ago. Links and such in that direction appreciated. :)

(cough cough) DistributedEditing, DistributingWiki, WikiFeatures:FailSafeWiki, PeerToPeerWiki, OfflineWiki, MoinMoin:OfflineWiki, …

You might want to search around for DRCS, or “Distributed Revision Control System.” There are other architectures you can use to get more or less what you want. There are cheesy ways to do it, which are just fine as well. I’m just trying to give you something to gnaw on and think about, since I can tell how interested you are in this subject.

I suspect there’s a more official name than DRCS; I can’t find a Wikipedia entry on the concept, and Google is returning only a handful of relevant links. Here’s one: The Purpose of Distributed Revision Control Systems.

There are sovereignty-type issues. People want to feel secure in their system, people need to not be carrying around loads of spam on their systems, reproducing it all across the Internet, and people need to be able to change their software and their system. It’s complicated, but not too complicated; I suspect there are already some distributed wikis out there, and, if not, there will likely be a few within a couple of years.

The question in my mind is always: Why is this feature so important? Why is this feature more important than, say, an SVG wiki, which is eminently doable, and would likely dramatically change the face of wiki?

Do you know that this feature, if you tried to implement it today, would dramatically slow down the implementation of wiki features? Yah! Because: you have to roll out the rendering technology to all the other redundant wikis! Server A goes down, at Alex’s house, and Server B is now being hit, at Mattis’ house. Except Mattis’ software is a year out of date, so pages aren’t rendering right, and things are going goofy. We could have Alex’s computer try to teach Mattis’ computer how to render things, but then there are security problems (the usual automatic-update issues), and there are platform configuration issues (folders in different locations, doohickeys that weren’t installed, …). And before Alex can safely put something “on the grid,” or “on the network,” he’s got to run a trial simulation on a virtual network to make sure that the feature will spread out right over the network. This is while he’s implementing a feature, right?

I mean, right now, when Alex wants to implement a feature, he just writes it into the code, renders a page, and sees how it turns out. I’m guessing Alex either works directly on the CommunityWiki, or he works on a test Oddmuse to see how it goes, and then when he’s done, he puts it on the live server. But if we’re distributing to hordes of computers, Alex probably has to think a bit more about what features he uses, what your platform looks like, how the system is going to react, etc. That’s more for him to think about, which means more calculations he needs to perform in his head, which means less he can keep in his head, which at the end of the day means: features are implemented slower.

Sad day. :(

Until we have a general component technology that allows Alex to ship things out and distribute them safely over a grid, what you’re looking for is a freezing technology. “Freezing,” as in it slows everything down.

So the people who are really working on what you’re talking about, are all working on weirdo things with abstract names that don’t include “wiki” in them, and attending conferences with “component” and “Java” in the title, and the rest of us are choosing not to touch it with a ten foot stick.

There are some people playing with some cheesy systems for doing a light form of what you’re talking about, and some of the same dynamics play out. Those are the sorts of things we could work with. But I don’t think it’ll be generally applicable, and I don’t think it will be very beneficial, and I’m not quite sure, really, just what the benefit is, and why it would be so important.

I mean: It’s going to be an awesome thing, when we have a sort of “Internet of Components,” where everyone’s computer is a server, whenever and wherever it happens to be attached, and you can say just what sort of data/processes you want to be hosting, and it’s all redundant in triplicate and distributed to meet demand, and processes are relocating themselves physically based on when they’re needed and by whom and what not. But that’s a general problem, and we here are so totally not in that business.

I’d look at Wikipedia: Grid computing, as a starting point.

I’m feeling heavy handed today; I must just be in a bad mood, or something.

What if the wiki raw pages were distributed, and were just accessed as blobs, and rendered via a local WikiEngine? They’d all need to use the same WikiText format, and you’d probably lose the potential for MetaData (unless the overall page were handled as something like XML, with the body of the page being a single node of WikiText within it), but that might be more LooselyCoupled than what you’re talking about above… http://webseitz.fluxent.com/wiki/z2005-10-27-LocalWikiAppCentralContentBase
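To make the XML idea a little more concrete, here is a minimal sketch in Python; all element names are made up for illustration. The raw page travels as a blob whose body is a single WikiText node plus a bit of metadata, and the receiving side hands the body to whatever local WikiEngine it happens to run.

    # Sketch only: the element names ("page", "meta", "body") are invented.
    import xml.etree.ElementTree as ET

    def wrap_page(name, wikitext, author, timestamp):
        """Wrap a raw wiki page as an XML blob for distribution."""
        page = ET.Element("page", name=name)
        meta = ET.SubElement(page, "meta")
        ET.SubElement(meta, "author").text = author
        ET.SubElement(meta, "timestamp").text = timestamp
        # The body stays a single node of raw WikiText; rendering is left
        # to the local WikiEngine on the receiving node.
        ET.SubElement(page, "body").text = wikitext
        return ET.tostring(page, encoding="unicode")

    def unwrap_page(blob):
        """Get the raw WikiText back out; the local engine renders it."""
        page = ET.fromstring(blob)
        return page.get("name"), page.findtext("body")

    blob = wrap_page("DecentralizedWiki", "This page will not die.",
                     "SomeAuthor", "2005-10-27T12:00:00Z")
    print(unwrap_page(blob))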

Do you think there would be enough interest in a public archive of all the plain text pages of this wiki, for example? No history, no change log entries, but read-only access via CVS? That should be very easy to do.

I believe that Alex’s suggestion is excellent! As an example of my thinking …

This is very much in my mind as I experiment with TiddlyWiki. Note that it automatically provides many of the functions that Alex was prepared to sacrifice in order to achieve ease of implementation. Effectively, all I have to do is copy the individual tiddlers into my TiddlyWiki file in order to wrap the microContent with the particular version of wiki (complete with one of many possible CSS presentations) that I want to use to present them to a particular audience. Since everything in such a self-contained file runs client-side, I can safely publish such a mini-wiki by simply making it available in ReadOnly file space via the ’net. People can help themselves to whatever they want and re-wrap it to taste. A slightly more elegant implementation would be to add a macro to the JavaScript code that would act somewhat like an InterWiki facility, or that could use HTTPGET to just pull in tiddlers from other reliable sources. Reliability could be achieved by means of an INCLUDE path that would let my client cascade through its preferred mirror sites. Security could be achieved by simply incorporating Certificates from ‘publishers’, which is also trivial to do.
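A rough sketch of the cascading INCLUDE path, assuming hypothetical mirror URLs; in TiddlyWiki the real thing would be a JavaScript macro doing the HTTPGET, this just shows the fallback logic.

    # Sketch of the cascading INCLUDE path: try preferred mirrors in order
    # and take the first one that answers. The mirror URLs are hypothetical;
    # in TiddlyWiki the real macro would be JavaScript doing the HTTP GET.
    import urllib.error
    import urllib.request

    INCLUDE_PATH = [
        "https://example.org/tiddlers/",         # preferred mirror (made up)
        "https://mirror.example.net/tiddlers/",  # fallback mirror (made up)
    ]

    def fetch_tiddler(title):
        """Return the raw text of a tiddler from the first reachable mirror."""
        for base in INCLUDE_PATH:
            try:
                with urllib.request.urlopen(base + title, timeout=5) as response:
                    return response.read().decode("utf-8")
            except (urllib.error.URLError, OSError):
                continue  # this mirror is down, cascade to the next one
        raise RuntimeError("no mirror in the INCLUDE path answered for " + title)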

Since all of this seems to be running parallel to the comments posted by BillSeitz, I’m now planning to investigate the links he was good enough to provide during the holidays.

Thanks, Lion, for the links and explanations. I needed that roar! I understand why it would, implemented now, slow down everything. It is not supposed to be implemented now, but thought about. This hive-mind-thingy made up a brilliant open-source operating system and recently a mighty good encyclopedia; what next? There is no control. The hive-mind comes up next only with what the hive-mind decides. And who tells us that this isn’t something that will be heavily opposed by “?” (nations, governments, monetary circles, bla, what- or whomever)? It’s a normal and healthy thinking about security and reducing vulnerability that makes me come up with it, I assume. I wish we could do without, but I fear we won’t. Not in the long run.

I’m looking forward to understanding the proposals that came up above. Community-wiki’s reactions are surprising to me. I’m interested in the lady, you know?

Concerning spam in a Freenet-like system: content that is requested gets copied onto every node in between the donor and the receptor. The oldest files that are never requested get deleted on these nodes to make space for the new file. Spam = requested?
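A toy model of that behaviour, just to make the spam question concrete; the capacity and node structure are invented. Every node on the path keeps a copy, and when space runs out it drops whatever has gone unrequested the longest, so spam survives only if somebody keeps asking for it.

    # Toy model of Freenet-style caching: every node between donor and
    # receptor keeps a copy, and the least recently requested entries are
    # evicted when space runs out. Capacity and names are invented.
    from collections import OrderedDict

    class Node:
        def __init__(self, capacity=150):        # say, 150 cached items
            self.store = OrderedDict()           # key -> content, oldest first
            self.capacity = capacity

        def cache(self, key, content):
            self.store[key] = content
            self.store.move_to_end(key)          # just seen, so keep it fresh
            while len(self.store) > self.capacity:
                self.store.popitem(last=False)   # drop the oldest unrequested item

        def request(self, key):
            if key in self.store:
                self.store.move_to_end(key)      # a request refreshes the copy
                return self.store[key]
            return None                          # not cached here

    def transfer(path, key, content):
        """Copy requested content onto every node between donor and receptor."""
        for node in path:
            node.cache(key, content)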

Funny thing: the more physically distant collaborators are, the more nodes their communication gets copied onto, and the less it is forgotten by the system. Think global, hehe.

BillSeitz – BayleShanks calls it a WikiWindow.

HansWobbe – You could publish by dispatching POSTs, as well. This is then, basically, a single-page “Ajax Wiki.” I need to give this more thought. The live Internet-based real-time single-document editors will probably shortly become wikis, I imagine.
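A sketch of what publishing by dispatching POSTs could look like from a script; the URL and the form field names are assumptions for illustration, not the documented interface of any particular wiki engine.

    # Sketch of publishing by dispatching POSTs. The URL and the field names
    # ("title", "text", "summary") are assumptions, not any engine's real API.
    import urllib.parse
    import urllib.request

    def post_page(base_url, title, text, summary=""):
        data = urllib.parse.urlencode({
            "title": title,
            "text": text,
            "summary": summary,
        }).encode("utf-8")
        request = urllib.request.Request(base_url, data=data)  # data makes it a POST
        with urllib.request.urlopen(request, timeout=10) as response:
            return response.status

    # post_page("https://example.org/wiki", "SandBox", "Hello from a script.")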

AlexSchroeder – I wish I could do an SVN pull, perform client-side editing, and then perform commits, to insert my changes. I am a much better writer in EMACS, I think, and it would mean that I could edit the CommunityWiki on the bus. (Where I spend 2 hours of undistracted time, per weekday!) You’d learn to fear the amount of content I added.


MattisManzel – :)
lol! – MarkDilley

    svn checkout svn://communitywiki.org/raw

Currently the entire system is not event-based. Thus, a suboptimal winner-takes-all method is being used at the moment (a rough sketch in code follows the list):

  1. The server has a copy of the raw pages in the working directory.
  2. svn update runs
  3. The timestamps of all files in the working directory are compared with the timestamps of the wiki pages. If a file is newer (i.e. the last change was made via svn), it is posted to the wiki.
  4. The raw pages are saved in the working directory.
  5. svn commit runs
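A minimal sketch of that loop, assuming a working-copy layout and helper functions (page_timestamp, post_to_wiki, save_raw_pages) that are hypothetical; only the timestamp comparison follows the steps above.

    # Minimal sketch of the winner-takes-all sync above. The directory layout
    # and the helpers passed in (page_timestamp, post_to_wiki, save_raw_pages)
    # are hypothetical; only the timestamp comparison mirrors the listed steps.
    import os
    import subprocess

    WORKING_DIR = "raw/trunk"   # working copy of the raw pages (assumed layout)

    def sync(page_timestamp, post_to_wiki, save_raw_pages):
        subprocess.run(["svn", "update", WORKING_DIR], check=True)        # step 2
        for name in os.listdir(WORKING_DIR):                              # step 3
            path = os.path.join(WORKING_DIR, name)
            if os.path.getmtime(path) > page_timestamp(name):
                # The file is newer, i.e. the last change came in via svn,
                # so post it to the wiki.
                with open(path, encoding="utf-8") as f:
                    post_to_wiki(name, f.read())
        save_raw_pages(WORKING_DIR)                                       # step 4
        subprocess.run(["svn", "commit", "-m", "sync from wiki",
                        WORKING_DIR], check=True)                         # step 5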

Todo:

  1. Set up svnserver. ✓
  2. Set up cron job to check all of the wiki into svn. ✓
  3. New pages are not added to svn! ✓
  4. Old pages are not deleted from svn! ✓
  5. Provide write access to the repository. Mail me for an account. ✓
  6. Add authentication for svn users. ✓
  7. When checking in via svn, immediately post to the wiki. ✓
  8. When posting to the wiki, immediately check into svn.

Here’s what I did:

    svn checkout svn://communitywiki.org/raw CommunityWiki

This will check out the plain text files into a directory called CommunityWiki. In it, there will be a directory called trunk with all the text files. Note that I was not able to get this to work correctly on a Mac, because there’s the typical NFC vs. NFD problem with UTF-8 encoded filenames, and there are page duplicates due to case:

    svn ls svn://communitywiki.org/raw/trunk | tr "[:upper:]" "[:lower:]" | sort | uniq -d | wc -l
    4

Those will map to the same filename on HFS+ file systems.
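For anyone who wants to spot those collisions before checking out on a Mac, here is a small sketch that approximates what HFS+ does (NFD normalization plus case folding, simplified) and reports page names that would land on the same file:

    # Sketch: find page names that would collide on a case-insensitive,
    # NFD-normalizing file system such as HFS+. This simplifies what HFS+
    # really does, but is enough to spot the duplicates counted above.
    import subprocess
    import unicodedata
    from collections import defaultdict

    def colliding_names(repo="svn://communitywiki.org/raw/trunk"):
        names = subprocess.run(["svn", "ls", repo], check=True,
                               capture_output=True, text=True).stdout.splitlines()
        buckets = defaultdict(list)
        for name in names:
            key = unicodedata.normalize("NFD", name).lower()
            buckets[key].append(name)
        return [group for group in buckets.values() if len(group) > 1]

    for group in colliding_names():
        print("would collide:", ", ".join(group))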

This thing about a wiki you cannot switch off anymore keeps coming up in my mind over and over again. There is no wiki in Freenet up to now. s23-wiki: freenet
