Monday, January 28, 2013

When ShareIt! met Gnutella...

A curious comment appeared on the recent HackerNews post:
Hmm, it sounds like you're reimplementing Gnutella for the browser, which is not necessarily a bad thing!
I've never been much of a P2P guy since back in the day I was connecting to the Internet using a neighbour's wifi (and it was already difficult to do just plain e-mail and web surfing...), so the other times I read about how the different P2P protocols worked I only got the most superficial idea. That's the reason why this comment caught my interest and sent me to take a look on Wikipedia.

But this time it was different: I'm developing my own P2P protocol, so now I know what all those words mean, and maybe it was just a trick of my unconscious or maybe just serendipity, but I got a good surprise... both protocols and priorities ARE ALMOST THE SAME!!!! :-D This has two direct implications:

  1. ShareIt! and the WebP2P protocol are not a total innovation "of my own", which is a little bit annoying (the "not invented here" complex... hey, what's the matter? I was raised using a Macintosh... :-P )
  2. on the other hand, since both projects have the same concepts in mind I can learn from their errors, and also, by adapting the protocol to use theirs and creating proxies in the same way as the SIP-over-WebSockets ones, I would be able to access the most-used currently-active P2P network in the world for free :-D
I don't know, but for the moment I'll try to send an email to the Gnutella guys and see what happens... :-P

Saturday, January 26, 2013

Unexpected consequences

Yesterday I published a post about WhatAreYouDownloading, didn't I? Ok: this morning its author ShirsenduK finally uploaded his project to the internet, and he also put a post on HackerNews about it. I can only say one thing:

What a madness!!! :-P

The fact is that in just a few hours things went crazy and people got totally enthusiastic about my project: they are forking, publishing and starring it on GitHub like there's no tomorrow!!! And the same is happening with WhatAreYouDownloading. In fact, in just 8 hours I've got 50% more stargazers than in the previous 6 months. Phew! :-P However, the most striking thing I found reading the HackerNews comments (apart from some interesting links about networks and P2P security) was:

An in-browser BitTorrent client using my technology?!?! O_o

I don't know if I should be happy that someone built it, or angry because nobody told me about it... :-P

Also, one unexpected side effect of so many forks is something I didn't know: forks also copy all the branches, so the production branch is being copied too... and automatically published by GitHub :-D This has led me to the decision that the best thing for the project is to split WebP2P into an independent project and improve the documentation, to make sure that all these forks follow a common standard and can keep living, whatever finally happens in the championship between source code repositories.

Finally, the revolution is starting, and I have first-row tickets and a popcorn bucket to watch it... :-D

Dear Santa...

...although a little bit late (or very early, it depends how you see it) I'm writing to tell you that I've been a good child and I want this t-shirt:


Or if not, I want a pony. Or both things. Whatever is easier for you.

With love, Piranna


:-P

Friday, January 25, 2013

The cake is NOT a lie

One month ago I published a post asking whether I would be able to get something better than a job thanks to ShareIt!... this week I got the answer to that question: ShareIt! will be used as the basis for a product at a multi-national company :-D

This week I received an email from Shirsendu Karmakar, an engineer at SlideShare (recently bought by LinkedIn), telling me that he was impressed by ShareIt! and that he would use it for a free and open project. It's nice to know about it... but it's nicer that he gave me administrator privileges on the repository without me even asking :-P

So WhatAreYouDownloading is mainly a fork of ShareIt! with a cleaner interface based on Bootstrap instead of plain jQuery (which, by the way, I considered at first for UI homogeneity, although it didn't fit too well with my original design and I dropped it very early in the development process), but with a radically more minimalistic UI design (it reminds me of Windows 8 Metro), using dialogs instead of tabs for the different sections and, to be honest, with a better fit for mobile phones :-)

Main screen
Sharing "tab", now a dialog, the same as the other ones :-)
He also linted the source code and did a fairly interesting code reorganization, using WebP2P as a library although I haven't finished isolating it yet. What a nice piece of cake... :-)

Best of all, since it's just an aesthetic (and code) make-over, the WebP2P protocol is intact, so WhatAreYouDownloading is totally compatible with ShareIt!, which was one of the design purposes I had in mind during development (allowing others to develop their own applications that are compatible with other ones using the same protocol). This made me think that I should split the ShareIt! and WebP2P source code as soon as possible to allow more applications like WhatAreYouDownloading to appear :-) As a first approach, I'll try to re-merge the linted code into the ShareIt! source tree and sync the WhatAreYouDownloading interface to the latest modifications of the ShareIt! code so they can interoperate better; let's see later where the rabbit hole ends... :-P

Saturday, January 19, 2013

Backing-up

One problem I have found in the IndexedDB specification is the fact that databases have a same-origin policy that doesn't allow sharing a database between several pages. So what happens if the domain of your page goes down? If it's a normal webapp with a central database and IndexedDB is used as a cache, not much: you'll just need to re-sync your content. But what happens with a server-less, pure client-side webapp like ShareIt!? There's no central database to sync with, so YOUR data is isolated and inaccessible on YOUR computer, and also, according to the IndexedDB specification, it can be deleted by the browser at any moment. Not a good thing.

A solution that was suggested to me was to do some tricks with the /etc/hosts file and load a local copy of the webapp, or at least a dumper application. A nice and intelligent trick, but not available for non-techie users. What's the other (I hope temporary) solution? Implement a backup system for the cache. This way, the cache content can be stored in a zip file directly from inside ShareIt! and later used to restore it on the same computer or on your newly bought machine, or also moved to another one to update its cache with the content of your backup and help your friends finish their downloads :-)
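
For reference, dumping the cache boils down to walking every object store of the database and serializing its records. A minimal sketch of the idea, assuming a hypothetical dumpDatabase helper (this is not ShareIt!'s real code or schema):

// Minimal sketch: dump a whole IndexedDB database to a JSON blob that can
// later be zipped and offered as a download. "dumpDatabase" is a
// hypothetical name, not ShareIt!'s real API.
function dumpDatabase(name: string): Promise<Blob> {
  return new Promise((resolve, reject) => {
    const request = indexedDB.open(name);
    request.onerror = () => reject(request.error);
    request.onsuccess = () => {
      const db = request.result;
      const storeNames = Array.from(db.objectStoreNames);
      const tx = db.transaction(storeNames, "readonly");
      const dump: Record<string, unknown[]> = {};
      let pending = storeNames.length;

      for (const storeName of storeNames) {
        const records: unknown[] = [];
        const cursorRequest = tx.objectStore(storeName).openCursor();

        // Walk every record of the store with a cursor
        cursorRequest.onsuccess = () => {
          const cursor = cursorRequest.result;
          if (cursor) {
            records.push(cursor.value);
            cursor.continue();
          } else {
            dump[storeName] = records;
            if (--pending === 0)
              resolve(new Blob([JSON.stringify(dump)],
                               { type: "application/json" }));
          }
        };
      }
    };
  });
}

Restoring is just the symmetric operation: read the JSON back and put() every record into a freshly created database.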

Now I must go; it's time to see what Kim Dotcom has to show us... :-D


P.S.: I need to check whether this also happens with AppCache when the application data is loaded from localhost, but I'm not sure...

Sunday, January 6, 2013

Handshake over transport channels

Just some minutes ago, one of the pieces of cake I've been working on for the last two weeks got mature enough: cleaning up the handshake signaling code and adapting the handshake servers interface to be more like Message Channels. This way it can be used with regular channels like the ones from PeerConnection DataChannels, so a peer can work as a regular handshake server, enabling a signaling channel to connect two peers through it, and as collateral damage, peers can be found all over the network mesh without requiring a handshake server in the cloud at all.
Wait, have you developed a distributed discovery routing protocol for browsers?!?!
Yes :-)
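
Roughly, "more like Message Channels" means every handshake transport ends up exposing the same minimal port-like surface; a hypothetical sketch of that shape (the names are mine, not the real WebP2P code):

// Hypothetical port-like surface for handshake transports, mimicking a
// MessageChannel port; the real WebP2P interface may differ.
interface HandshakeTransport {
  onmessage: ((event: { data: unknown }) => void) | null;
  postMessage(data: unknown): void;
  close(): void;
}

With that shape, a PubNub channel, a WebSocket or a DataChannel to another peer become interchangeable as signaling transports.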

Since using a cloud service like PubNub is not the best idea regarding the future of the network, it being the biggest point of failure of the whole platform, the best idea is to move the peers routing to the webp2p network as soon as possible. To do this, when a peer gets connected to one of the handshake servers it keeps waiting until some more peers have connected after it, offering them a connection to the network through itself, just like a "reverse-proxy" (in the same way they were connected before), and when enough peers are connected through it, it disconnects. This way we can be sure that the network is sufficiently dense, but we also increase the mesh entropy by not having all the peers connected through the same few peers, making it more difficult to break down. A sketch of this bootstrap policy is shown below.
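
A minimal sketch of the policy, assuming hypothetical HandshakeServer and Peer interfaces and an arbitrary threshold (none of these names are the real WebP2P API):

// Hand-wavy sketch of the bootstrap policy described above.
interface Peer {
  openSignalingChannel(): void;  // reuse this peer as a signaling relay
}

interface HandshakeServer {
  onNewcomer(handler: (peer: Peer) => void): void;
  disconnect(): void;
}

const MAX_PROXIED_PEERS = 5;  // assumed threshold, not from the post

function bootstrap(server: HandshakeServer): void {
  let proxiedPeers = 0;

  // Stay connected to the handshake server, acting as a "reverse-proxy"
  // entry point for every peer that arrives after us...
  server.onNewcomer((newcomer) => {
    newcomer.openSignalingChannel();

    // ...and once enough peers route through us, leave the server so the
    // mesh doesn't concentrate around the same few entry points.
    if (++proxiedPeers >= MAX_PROXIED_PEERS)
      server.disconnect();
  });
}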

Also, since I have changed the signaling code to be a transport interface, the handshake servers code is now as minimal as possible, so it's easy to swap one for another. And with the signaling (now routing) code being a transport interface it can be used over DataChannels, so the next step was to figure out how to find peers on the mesh. The easiest way is just a flooding protocol. It's not the most optimal regarding network resources, but since KadOH is not currently ready (it mainly seems to be dead... :-/ ) it's at least easy to develop and a good starting point to investigate from, and it can find a peer anywhere on the network if it exists (I need to think about what to do if it doesn't exist... maybe using a timeout?), which is the interesting point. Then, so a peer can get the answer routed back, I've done it the easiest way: just append to the "packet" the list of all the peers it has been routed through, which can also be used to stop the propagation of the flood, so I kill two birds with one shot :-D A sketch of the idea follows.
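
This is roughly how the flood-with-route-list could look, assuming a hypothetical neighbours table and peer ids (again, not the real WebP2P implementation):

// Sketch of the flooding search with a routed-peers list.
interface FloodPacket {
  target: string;    // id of the peer we are looking for
  route: string[];   // ids of every peer the packet has been routed through
}

const myId = "peer-42";  // hypothetical local peer id
const neighbours = new Map<string, (packet: FloodPacket) => void>();

function onFloodPacket(packet: FloodPacket): void {
  // The route list doubles as the flood-suppression check: if our id is
  // already on it, we have seen this packet before and drop the duplicate.
  if (packet.route.includes(myId)) return;
  packet.route.push(myId);

  if (packet.target === myId) {
    // Found: the accumulated route is the path to answer back over.
    answerBackAlong(packet.route);
    return;
  }

  // Not us: keep flooding to every neighbour (each one gets its own copy
  // of the packet once it is serialized onto the wire).
  for (const send of neighbours.values()) send(packet);
}

function answerBackAlong(route: string[]): void {
  // Walk the recorded route in reverse to reach the searching peer;
  // left unimplemented in this sketch.
}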

Now the single point of failure that is the handshake server still exists (I hate you >:-( ), but at least a peer will spend so little time connected there before routing only over the webp2p network that trying to catch one of them will almost be a homage to Christina Rosenvinge... :-P

Wednesday, January 2, 2013

How to hack a contest

During the last two weeks of the year there was a contest on the Youth Card of Madrid webpage about finding their mascots dressed as the Three Wise Men.

The fact is that I had been looking for them and only found two, so I decided to make a mirror of the page and search locally... just to find that in fact there were only two of them ¬¬ After sending them an email to ask if it was a bug or a bad trick, they confirmed to me that the third one would only appear from time to time since she (it's the girl with glasses ;-) ) is very shy, but would appear at least once each morning until the end of the year. I didn't want to be pushing F5 all the time but I wanted to win one of the prizes, so I decided to do it my way.

Challenge accepted

First of all, I had to be able to identify whether one of the mascots is on the page, so when I found one of them I quickly went to the Chrome Inspector to look at the page content, seeing references to a nice "navidad2012" folder and that the animated icons were made in Flash. Now I knew what to look for, so it was time to figure out where to look. Since the page is mainly static (bizarre, I know, but what would you expect from a governmental institution? :-D ), I made a mirror of the website on my local hard disk using wget so I could search its files for that string. Since wget shows a lot of output, I silence it with the quiet argument:
wget -mq http://$BASE_URL
and later searched all the files using grep, looking for the previously spotted folder and the Flash files, getting only the paths of the files that contain them:
grep -lR 'navidad2012/.*\.swf' $BASE_URL/*
Now it's time to get notified by mail when the pages are found. This was a little bit complicated since GMail protects itself against sending spam, so some additional config steps are required so we send the emails authenticated with our own account. First of all, we need to install the mailutils package for the mail command and the ssmtp package for the outgoing mail server:
sudo apt-get install mailutils ssmtp
and later configure the smtp server following these instructions (I didn't need to follow the steps about removing sendmail, so maybe that's not necessary at all anymore). After that, we are able to send emails from the command line (the content is introduced via the standard input):
mail -s "$BASE_URL" "$EMAIL"
But receiving an email on every check is not a good idea, since we are only interested in a notification when we have the Three Wise Men, so we check for that first:
FILES=$(grep -lR 'navidad2012/.*\.swf' $BASE_URL/*)
WISE_MEN=$(echo "$FILES" | grep -c .)  # count non-empty lines

if [ $WISE_MEN -eq 3 ]
then
   ...
fi
Now it's time to make this run every 5 minutes. A simple infinite loop with a sleep is enough, but it's also nice to remove the useless files when we didn't find what we wanted (after making sure we don't remove them on success, by using break inside the condition)...:
while :
do
   ...
   rm -r $BASE_URL
   sleep 5m
done
Et voilà! Now I only need to add some echo messages to know how it's working when I'm at the machine (mainly studying for the exams I have after the holidays... ¬¬), remove all data before starting to work, and add a shebang and execution permissions, and now I can go to the gym without worrying about losing the prize :-D

Full code of the script:
#!/bin/bash
#
# Mirror the contest page every 5 minutes and send a mail when pages
# referencing all three mascots' Flash animations are found.

BASE_URL=www.carnejovenmadrid.com
EMAIL=piranna@gmail.com

# Remove all data before starting to work
rm -rf $BASE_URL

while :
do
    # Mirror the site quietly and list the pages referencing the mascots
    wget -mq http://$BASE_URL
    FILES=$(grep -lR 'navidad2012/.*\.swf' $BASE_URL/*)
    WISE_MEN=$(echo "$FILES" | grep -c .)  # count non-empty lines

    if [ $WISE_MEN -eq 3 ]
    then
        echo
        echo "*** FOUND 3 WISE MEN!!! ***"
        echo "$FILES"

        # Mail the list of matching pages and keep the mirror around
        echo "$FILES" | mail -s "$BASE_URL" "$EMAIL"
        break
    fi

    echo "Only found" $WISE_MEN "wise men at"
    echo "$FILES"
    echo "Removing files"
    echo

    rm -r $BASE_URL
    sleep 5m
done