Monday, April 13, 2009

Brown project spamming MediaWiki sites

A few weeks ago I noticed some very strange-looking pages showing up on the TinyOS Docs Wiki, which I maintain. These pages contained what appeared to be ASCII-encoded binary data of some kind, in no format I recognized. Cursory searches for what might be causing this turned up nothing, so I ended up spending a couple of hours locking down the site to prevent malicious edits.
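For other MediaWiki admins hit by this: the usual lockdown is a couple of permission settings in LocalSettings.php, along these lines (these are standard MediaWiki settings; I won't claim this is exactly what I did, but it is the general idea):

    # In LocalSettings.php: require a login to edit, and stop drive-by registration.
    $wgGroupPermissions['*']['edit'] = false;           # anonymous users may not edit
    $wgGroupPermissions['*']['createaccount'] = false;  # disable self-serve account creation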

It turns out this was (what appears to be) a student project from Brown called Graffiti, which is intended to provide a kind of encrypted, distributed filesystem on top of "public" MediaWiki sites (I am inferring, since the paper isn't available). (I should point out that the bogus pages on my site did not have the explanatory message at the top saying that they were related to this project -- I guess this was only added in a later version of their code.)
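To give a flavor of what the bogus pages looked like: each one was just a blob of opaque ASCII text, presumably an encrypted chunk of some file. Since their code isn't public, the sketch below is purely my guess at the general shape of such a client -- the chunk size, page-naming scheme, and toy XOR "cipher" are all invented for illustration (the XOR keystream in particular is not real encryption):

    import base64
    import hashlib

    CHUNK_SIZE = 4096  # invented; the real system's chunk size is unknown

    def keystream(key: bytes, length: int) -> bytes:
        """Expand a key into a throwaway XOR keystream via SHA-256 (illustration only, not secure)."""
        out = b""
        counter = 0
        while len(out) < length:
            out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
            counter += 1
        return out[:length]

    def chunk_to_page(chunk: bytes, key: bytes, index: int) -> tuple[str, str]:
        """Turn one file chunk into a (title, body) pair resembling a bogus wiki page."""
        enc = bytes(a ^ b for a, b in zip(chunk, keystream(key, len(chunk))))
        body = base64.b64encode(enc).decode("ascii")
        # Derive the page title from the key and chunk index, so only someone
        # holding the key can locate the chunks and put them back in order.
        title = hashlib.sha256(key + index.to_bytes(4, "big")).hexdigest()[:16]
        return title, body

    payload = b"example payload " * 500  # stand-in for a real file
    key = b"shared secret"
    for i in range(0, len(payload), CHUNK_SIZE):
        title, body = chunk_to_page(payload[i:i + CHUNK_SIZE], key, i // CHUNK_SIZE)
        # A real client would create these pages via the MediaWiki edit API.
        print(title, body[:60] + "...")

Retrieval would just be the reverse: fetch pages by derived title, base64-decode, XOR with the same keystream, and concatenate the chunks.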

The authors have said little about the trouble they have caused, but a comment that previously appeared on the project page suggests that they don't quite get why this is such a problem:
    03/09/2009 - Rejection!
    Our paper got rejected from IPTPS. One of the main points brought up by the reviewers was that our system was not a true peer-to-peer system. Most reviewers also seemed appalled at the idea of commandeering abandoned websites in order to store illegal content. Nevertheless, we are not deterred and will be searching for the next workshop/conference that is bold enough to take on the ideas of the Graffiti project!

(Seen on this discussion board.)

Now, while the idea of a distributed filesystem riding on top of "open" sites is cool, the way the authors went about this is problematic. Just because some MediaWiki sites are open doesn't make it OK to spam them with bogus pages for the purpose of doing your research -- I am sure this violates the terms of service of Brown's network as well as those of the networks hosting the sites they spammed.

There are better ways to evaluate this system than hammering on unprotected wiki sites without permission. They could have used PlanetLab and run their own wikis to evaluate the system's scalability and robustness. They could have asked permission from site owners, with a promise to clean up after themselves once the experiments were run. I hope the authors are kidding about the "bold enough" comment above. It suggests they underestimated both the legal and ethical issues raised by spamming open sites just to get a paper published, and the amount of hassle they have caused the sysadmins of the affected sites. I just hope they learned some kind of lesson from this.

2 comments:

  1. I am surprised that they did not use Twitter! That way they could set up accounts whose (140-character) updates could be the content they want to distribute. Plus, since those accounts need not follow or be followed by anybody, it is possible that they would not be considered spammers (and since the content is visible only to users of the platform, one would need to know which accounts to read in order to reassemble the desired files).

    Since there exist many Twitter-like networks now, one could distribute content across both users and networks.

