2005-02-05

In case you don't see it on my blog: I'm upgrading to MoinMoin 1.3.

It's not an easy upgrade; I have to run through 9 migration scripts per wiki, on top of a ton of other changes.

I am presently running 30 wiki. I believe I'll be cutting a bunch of wiki with this upgrade, and will finish with a mere 10-15.

I'm keeping:

  • aware.wiki.taoriver.net
  • cafoscari.wiki.taoriver.net – unless MattisManzel tells me it can go. Is that wiki in use?
  • flashmob.wiki.taoriver.net – unless MattisManzel tells me it can go. Is that wiki in use?
  • freegames.wiki.taoriver.net
  • futures.wiki.taoriver.net
  • intcomm.wiki.taoriver.net
  • interwiki.wiki.taoriver.net
  • notebooks.wiki.taoriver.net
  • onebigsoup.wiki.taoriver.net
  • onebigstruggle.wiki.taoriver.net – MarkDilley: is this in use?
  • papertalk.wiki.taoriver.net
  • visual.wiki.taoriver.net
  • wikifeatures.wiki.taoriver.net
  • wikinodes.wiki.taoriver.net
  • wiki.taoriver.net

More, if people raise the point.

We'll see how this goes. Squeaky wheels will be served first.

Sounds like a good reason to switch to OddMuse!

Btw, I have preliminary "wikicp" functionality built on top of WikiGateway, in case you ever want to migrate a wiki to OddMuse. It is not extensively tested (in fact, not tested at all against MoinMoin, though the underlying WikiGateway functions have been), and it doesn't convert between wiki syntaxes (although if you have the regexes for the conversion you want, that could easily be added), but it does copy the pages' (source) text.

The syntax is like this:

./wikicp --st oddmuse1 --dt oddmuse1 http://www.emacswiki.org/cw:* http://bshanksserver.dyndns.org/testwiki/wiki.pl

(I didn't actually finish the test b/c it would have taken forever, but the first 2 or 3 pages seem to have worked!)

Since WikiGateway is currently a mishmash of Perl and Python (in the middle of a Python rewrite), you may as well hold off on installing it until it's all Python, so I'd be happy to run the script from here if you want it.
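
Conceptually, the core of it is not much more than this (a sketch, assuming both ends speak the WikiRPC XML-RPC interface that MoinMoin exposes; WikiGateway's per-wiki-type backends are what handle wikis that don't):

 import xmlrpclib

 # illustrative endpoints, not real ones
 src = xmlrpclib.ServerProxy('http://example.org/srcwiki?action=xmlrpc2')
 dst = xmlrpclib.ServerProxy('http://example.org/dstwiki?action=xmlrpc2')

 for name in src.getAllPages():
     # copy the raw page source; note that putPage is often disabled
     # unless the destination wiki is configured to allow it
     dst.putPage(name, src.getPage(name))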

It's really tempting, and I'm thinking about it.

You know, we could always run the regexes later if we wanted to…

Why not set up an empty OddMuse or two, tell me what to copy over, and we'll see how it feels?

OK, I started a WikiFeatures on the OddWiki. I'm planning to copy the MoinMoin WikiFeatures into it. Right now I'm having some sort of strange character-encoding problem when fetching pages from WikiFeatures's MoinMoin GetPage XMLRPC method. Normal English pages like WikiFeatures:BayleShanks come through like this:

H\x00\x00\x00i\x00\x00\x00\n\x00\x00\x00\n\x00\x00\x00P\x00\x00\x00l\x00\x00\x00e\x00\x00\x00a\x00\x00\x00s\x00\x00\x00e\x00\x00\x00 \x00\x00\x00v\x00\x00\x00i\x00\x00\x00s\x00\x00\x00i\x00\x00\x00t\x00\x00\x00 \x00\x00\x00m\x00\x00\x00y\x00\x00\x00 \x00\x00\x00w\x00\x00\x00e\x00\x00\x00b\x00\x00\x00s\x00\x00\x00i\x00\x00\x00t\x00\x00\x00e\x00\x00\x00 \x00\x00\x00a\x00\x00\x00t\x00\x00\x00 \x00\x00\x00h\x00\x00\x00t\x00\x00\x00t\x00\x00\x00p\x00\x00\x00:\x00\x00\x00/\x00\x00\x00/\x00\x00\x00p\x00\x00\x00u\x00\x00\x00r\x00\x00\x00l\x00\x00\x00.\x00\x00\x00n\x00\x00\x00e\x00\x00\x00t\x00\x00\x00/\x00\x00\x00n\x00\x00\x00e\x00\x00\x00t\x00\x00\x00/\x00\x00\x00b\x00\x00\x00s\x00\x00\x00h\x00\x00\x00a\x00\x00\x00n\x00\x00\x00k\x00\x00\x00s\x00\x00\x00 \x00\x00\x00o\x00\x00\x00
...

Since I don't know much about character encodings: do you recognize this? Basically, "\x00\x00\x00" appears between each character. Is this Unicode or something? I'm investigating…

Wait, the damn thing prints fine, so I guess it's unicode. How do I tell Python to convert it to ASCII?

I managed to evade the problem by using wiki XMLRPC protocol version 2. But I'd still be interested in what the Python commands would have been to convert the above to ASCII. s.encode('ascii') and s.decode('ascii') didn't work.
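
For reference, the version-2 call looks roughly like this (the URL pattern is an assumption; MoinMoin exposes the v2 interface through action=xmlrpc2):

 >>> import xmlrpclib
 >>> srv = xmlrpclib.ServerProxy(
 ...     'http://wikifeatures.wiki.taoriver.net/moin.cgi?action=xmlrpc2')
 >>> text = srv.getPage('BayleShanks')   # v2 hands back plain UTF-8 text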

That looks like 32 bits per character, so I'd say it's some form of little-endian UTF-32.

And for some strange reason, Python only comes with "utf-8" and "utf-16" as valid "decode" values.

You can:

 >>> bytes = "H\x00i\x00\n\x00"
 >>> unistring = bytes.decode('utf-16')
 >>> unistring
 u'Hi\n'

You can do that with either "utf-8" or "utf-16". But for some reason, I can't say "utf-32" or "utf-32LE" (LE = little endian). I have no idea why. I also don't know how it is that my Python programs are producing UTF-32 for you!

I've been wanting to diagram how Python unicode works, like how I diagrammed its time use and regex use.

Basically, "encode" is meant to be called on unicode data, and "decode" is meant to be called on byte data. Continuing from above:

 >>> bytes
 'H\x00i\x00\n\x00'
 >>> unistring = bytes.decode('utf-16')
 >>> unistring
 u'Hi\n'
 >>> unistring.encode('utf-8')
 'Hi\n'
 >>> unistring.encode('utf-16')
 '\xff\xfeH\x00i\x00\n\x00'
 >>> unistring.encode('utf-32')
 Traceback (most recent call last):
   File "<stdin>", line 1, in ?
 LookupError: unknown encoding: utf-32

I'm guessing that the "\xff\xfe" at the beginning of the utf-16 encoding is a byte-order mark, saying "this is little endian."
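
You can check that guess against the codecs module:

 >>> import codecs
 >>> codecs.BOM_UTF16_LE
 '\xff\xfe'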

I learned about unicode stuff about 2-3 weeks ago. I kept notes about what I thought were the largest mental misconceptions, and what were the most revealing ways of thinking about it. Sadly, I've forgotten about all that. (Should'a documented it in the wiki!)

In Python, a unicode string and a byte string can carry the same text; the difference is in how Python treats and presents it. I found it super-helpful not to think about what the console says, or to work through the console, because the console lies. That is, the characters go through conversions even when being printed to the screen: your console has an understanding of encoding, and your fonts have an understanding of encoding, and I had a lot of difficulty separating it all out.

I had a lot easier time thinking about the concepts, instead of the concrete representations. (Which is opposite my usual course of thinking.)

"Decoded," to Python's mind, is data being treated as unicode data. "Encoded," to Python's mind, is data being treated as bytes. The data isn't actually changing form, at all. It's just the treatment of the same data that is being changed. But there is no actual conversion taking place.

So, you only ever run "decode" on a byte string. (Another thing: don't think of native Python strings as "strings." Think of them as "bytes." And indeed, in the planned Python 3.0 they're calling them just that: byte strings become "bytes", and unicode strings become plain "strings".)

So you can decode bytes, and encode unicode strings.

Don't think about decoding unicode strings, and don't think about encoding bytes. The bytes are already coded. Only unicode strings live in pure, abstract, heavenly, platonic form. There is no code there, only perfect clarity. (At least, that's how Python makes it seem for you.)
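
Python 2 won't actually stop you from calling "decode" on a unicode string, though, which makes the trap easy to fall into: it quietly encodes to ASCII first, and blows up the moment a non-ASCII character is involved:

 >>> u'Hi\n'.decode('utf-8')       # "works", but only by accident
 u'Hi\n'
 >>> u'caf\xe9'.decode('utf-8')    # the implicit ASCII encode happens first
 Traceback (most recent call last):
   File "<stdin>", line 1, in ?
 UnicodeEncodeError: 'ascii' codec can't encode character u'\xe9' in position 3: ordinal not in range(128)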

Again, sadly, I have no idea how to get from UTF-32 to Python unicode. I don't see the path. I saw something somewhere about being able to compile something into your Python.

That said, if I'm actually serving UTF-32 to you somehow… then there's probably a way I just don't know.
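
One way that should work even without a "utf-32" codec is to unpack the 4-byte code points by hand with struct. A sketch (untested; also, unichr will balk at characters beyond U+FFFF on a narrow Python build):

 >>> import struct
 >>> def decode_utf32le(data):
 ...     count = len(data) // 4
 ...     codepoints = struct.unpack('<%dL' % count, data)
 ...     return u''.join([unichr(cp) for cp in codepoints])
 ...
 >>> decode_utf32le('H\x00\x00\x00i\x00\x00\x00')
 u'Hi'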

I'm cross-posting to PythonInfo:Unicode.

Arrg, there seems to be a bug in MoinMoin that prevents it from serving pages with non-ASCII titles via XMLRPC! I reported it: http://moinmoin.wikiwikiweb.de/MoinMoinBugs/XmlRpcUnicodeDecodeError

I'm going to work around it with action=raw.
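
Something like this should do it (the script path here is a guess; adjust it to the wiki's real URL):

 >>> import urllib
 >>> base = 'http://wikifeatures.wiki.taoriver.net/moin.cgi/'
 >>> text = urllib.urlopen(base + urllib.quote('FrontPage') + '?action=raw').read()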

What, no gfxalgo.wiki.taoriver.net?

I wouldn't mind if all its pages were moved over to the visual wiki. (What to do about pages with the same name on both wiki?) (I guess I need to start Visual:CategoryAlgorithm.)

Maybe we should wait until Visual:RecentChanges scrolls by too rapidly to follow, then try to split the visual wiki into multiple wiki.

/me crosses fingers

Back to Unicode: I looked up Wikipedia:Byte_order, and I was totally astonished to find my name at the bottom of the page. Someone thinks I'm some sort of expert. The poor, deluded fools. :-)

/me looks astonished

(I really need to add my own visual expressions, right, Alex? Um… how? VisualExpressions 2004-07-10 is a bit vague…)

I added some to DenotingAuthor. What do you think? – Alex.

I noticed how parts of WikiFeatures got moved into OddWiki. Interesting!

I'll try to document regular expression fixes as I find them:

   "^ *"   "*"
   "``"    "\"\""

In page names, replace spaces with underscores ("_"). There is no need to replace spaces with underscores in free links.
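
In Python, those fixes might look like this (a sketch; I'm reading the first rule as "strip the indentation in front of list bullets"):

 import re

 def moin_text_to_oddmuse(text):
     # rule 1: "^ *" -> "*", i.e. un-indent list bullets
     text = re.sub(r'(?m)^ +\*', '*', text)
     # rule 2: "``" -> "\"\""
     text = text.replace('``', '""')
     return text

 def moin_name_to_oddmuse(name):
     # page names only; free links keep their spaces
     return name.replace(' ', '_')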

I would like to keep it. I do use it, rarely at the moment, but I have plans. If you wanted to kill it, I would be OK with that; I'd just want the raw data to seed a future wiki. Best, Mark

Thanks for the regexes, Alex; keep 'em coming. As I said above, we can iterate over the database with a regex later, so at first I'm trying to just copy the stuff over. Eventually, of course, the idea is that the "wikicp" script would do both at once for you.

After the encoding thing and the MoinMoin bug, I initially hadn't bothered to put in any exception handling, so the thing crashed in the middle a few times, which is why only part of the wiki was copied over. I've only added one exception-catching clause, but maybe that'll be enough. I'm rerunning/debugging the thing until I get it right, i.e. until it goes all the way through. It's not quite getting every page (it's had errors with a handful so far, mostly ones with apostrophes or spaces in their names), but that's okay; I'm keeping a log of the ones it misses.

Btw, the script sleeps for 10 seconds between each page copy to avoid getting throttled by (or overloading) the server.
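
The loop is roughly this shape (a sketch; page_names, get_page, and put_page are stand-ins for the actual WikiGateway calls):

 import time

 missed = []   # pages that failed, logged for a later retry
 for name in page_names:
     try:
         put_page(dest, name, get_page(source, name))
     except Exception, e:
         missed.append((name, str(e)))   # log it and move on
     time.sleep(10)   # don't hammer (or get throttled by) either server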

OK, now I think everything's been copied over from WikiFeatures to the OddWiki version of it except for:

  • pages with apostrophes in the title
  • pages with spaces in the title

Please lemme know if you notice anything else amiss.

I think the problem with both of these is due to the way I meld the old Perl and the new Python in the WikiGateway library. Since the plan is to eventually have only Python and dispense with the Perl, this shouldn't be a problem after that.

There should be about 33 such missing pages.

In addition to those pages, there were a few pages that didn't copy the first time due to network errors (timeouts, etc.), which I copied over just now (with wikicp).

Unless there's something I missed, I won't be copying anything over again. So you can start editing the OddWiki WikiFeatures (at http://communitywiki.org/odd/WikiFeatures/FrontPage) without me overwriting it.

Wow, cool! Thanks, Bayle.
