Sunday, September 2, 2012

Raspberry Pi & Spotify

Like many people, we have a radio in the kitchen. Ours, however, is so old that you don't want to touch the volume control, because the random noise it produces gives you a headache. Since I am also a proud Raspberry Pi owner, I had the idea to turn it into a Spotify kitchen radio, meaning: put the RPi in our kitchen, connect it to the LAN, plug in some speakers and run Spotify on it. However, there is no ARM build of the Spotify client, so it's not that easy. There is, though, an ARM version of libspotify, which provides an API to Spotify's service (you will need a premium account, which I recommend to everyone anyway - Spotify is really awesome and it's only 10 bucks per month). So here are some comments, preliminary results and some advice on how to make the Spotify API work on your RPi (and at least be able to play your playlists).

1.) If you use Raspbian - that won't work, since it is built using the hard-float ABI of the ARM. If you install libspotify and try to make the examples work, everything seems fine at first, but then you get an error like
libspotify.so.12.1.51: cannot open shared object file: No such file or directory
This is because libspotify seems to have been built against the soft-float ABI. As long as Spotify doesn't release a hard-float build, you will have to go to step 2. This insight is the crucial part of the game here (and a "cannot open shared object file" error is not an obvious hint in this direction).
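If you want to verify the ABI mismatch yourself, you can inspect the library's ARM attributes with readelf. As a sketch (the install path and version number are examples, not necessarily where your copy ends up):

```shell
# Hard-float ARM binaries carry the attribute "Tag_ABI_VFP_args: VFP registers";
# a soft-float build like libspotify's lacks it.
readelf -A /usr/local/lib/libspotify.so.12.1.51 | grep Tag_ABI_VFP_args
```

If the grep prints nothing, the library is soft-float and will not load on hard-float Raspbian.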

2.) If you want libspotify to work, you will have to run the soft-float build of Raspbian on your RPi, also available here: http://www.raspberrypi.org/downloads

3.) Once you have that, things are straightforward. Download libspotify from the Spotify developer page and follow the instructions in the readme. This is also where you will need a Spotify premium membership in order to get an appkey.
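For reference, the install boils down to something like the following (the exact tarball name and version are from memory and may differ - check the Spotify developer page for the current armv5 release):

```shell
# Unpack the libspotify release tarball and install it system-wide.
tar xzf libspotify-12.1.51-Linux-armv5-release.tar.gz
cd libspotify-12.1.51-Linux-armv5-release
sudo make install prefix=/usr/local
sudo ldconfig          # refresh the linker cache so the new .so is found
```

Don't forget to place the appkey.c generated from your premium account where the examples expect it, as described in the readme.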

4.) In order to test it, you can use the jukebox example. After building it, simply run jukebox/jukebox. It will ask you for your login credentials and a playlist to play. If you don't hear anything, try another playlist. This "terminal version" of Spotify does not seem to tell you when a title is no longer available; it simply stays silent.
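In case the directory layout is unclear, this is roughly what testing looks like (the examples path is taken from the release tarball and may vary between versions):

```shell
# Build and run the jukebox example that ships with libspotify.
cd share/doc/libspotify/examples/jukebox   # path inside the unpacked tarball
make
./jukebox                                  # prompts for username, password and a playlist name
```

If the build fails, check that the appkey.c from your developer account is present and that ALSA headers are installed.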

Advice: The jukebox example requires you to have ALSA installed and *configured*. So, before testing the Spotify API and complaining that it does not play any sound, you should configure the sound card. See e.g. here, or simply google for "raspberry pi" and "alsa". Have fun!
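A minimal ALSA sanity check on the RPi looks roughly like this (these are the usual commands for the on-board sound chip; your setup may differ):

```shell
# Load the Broadcom on-board sound driver (if not loaded at boot).
sudo modprobe snd_bcm2835
# Route audio output: 0 = auto, 1 = 3.5mm jack, 2 = HDMI.
sudo amixer cset numid=3 1
# Play the built-in test sound - you should hear "front left / front right".
speaker-test -t wav -c 2
```

Only once speaker-test produces sound is it worth debugging the jukebox example itself.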

PS: As a kitchen radio this is still a bit inconvenient. What I would ultimately like to have is a LAN-internal web interface to the Pi and libspotify, so that from every computer/tablet/smartphone in the LAN I can access the local web interface and search for/play titles, artists, albums ...

Sunday, March 25, 2012

Information ranking based on social media

It has been suggested that the importance ranking of webpages should be based more and more on "social signals", i.e. on how often a page is shared rather than linked. But this raises questions: will the importance given to a shared piece of information differ by the "social" status of the person who shares it? Is a link shared by Barack Obama "worth more" than a link shared by me? If so, who decides who is "more trustworthy"? These questions haven't been answered. Nevertheless, Google & co. have already started implementing this kind of social ranking. If you have a g+ account and do a Google search, you will eventually find "personal search results", based on the things shared by people you have in your circles.

And to be honest, this service is amazingly useless at this stage. Let's say I perform a Google search for "android tablet". Most likely I am looking for product information about android tablets, a wikipedia entry, or some other general information. However, the "personal results" only seem to perform a full-text search over all the posts of the people I follow on g+. A full-text search... that's it? Is this supposed to be the new awesome world of social ranking? There is no useful information in the 110 personal results whatsoever, since most people mention the terms "android" and "tablet" in a rather specific context: either they are talking about an app, or a special feature of some android tablet, or the success of android tablets in general, or ...

In this respect the "social signals" are not used in a constructive manner - they just add more clutter to the other 530,000 search results. The challenge will be to add a social dimension that usefully improves information filtering. And I feel we are far away from that. Something else is needed here.

Thursday, December 15, 2011

reply to "Peer review without peers?"

I just came upon this post by Aaron Shaw about a somewhat unusual idea for the scientific peer review process. Since I did not want to leave a lengthy text in the comments section of his post, I decided to put it here. Aaron, I am happy to hear any comments you might have.

So here is the thing: we (here at ETH) have been thinking quite a bit lately about issues of scientific evaluation and peer review. In this vein, two questions especially arise: 1) How can one judge the value of research performed in an interdisciplinary research environment? And 2) How can we get *good* research by *unknown* people into high-impact journals and *bad* research by *established* people out of them, preventing a few scientists from de facto deciding what is "hype" at the moment and what is not? But I will try to post about this another time. So let's talk about Aaron's post.

Aaron is basically talking about the idea to use wisdom-of-crowds effects for scientific peer review:
...what if you could reproduce academic peer review without any expertise, experience or credentials? What if all it took were a reasonably well-designed system for aggregating and parsing evaluations from non-experts?
And he continues:
I’m not totally confident that distributed peer review would improve existing systems in terms of precision (selecting better papers), but it might not make the precision of existing peer review systems any worse and could potentially increase the speed. If it worked at all along any of these dimensions, implementing it would definitely reduce the burden on reviewers. In my mind, that possibility – together with the fact that it would be interesting to compare the judgments of us professional experts against a bunch of amateurs – more than justifies the experiment.
First of all, I agree it would be interesting to test whether non-expert crowds can perform as well as "experts" in a peer review process. Here is my predicted outcome: for the social sciences and qualitative economics papers, this might be the case most of the time. It will *not* work for the vast majority of papers in the quantitative sciences. But this is actually not the point I want to make here.

The point is the following: what Aaron and others are thinking of is how to "speed up" the peer review process and "...reduce the burden on reviewers." Humbly, I think those are *completely wrong* incentives from an academic point of view. Reviewing is a mutual service scientists provide among their peers. If our goal is to reduce "the burden" of reviewing so many papers, we should all write less. (This might be a good idea anyway.) There is also a fundamental problem with peer review without peers: non-experts will not know the existing literature, so redundancy will increase (even more) - and that is something you cannot get rid of without peers. If we went in this direction, the "reviewing crowd" would basically be a detector of "spam papers" and nothing more. But those are not the papers that take a lot of time to review; they are often easily identified.

What really makes peer review so time-consuming is a) the complexity of papers and b) their quantity. We should not aim at reducing a), because this is just the way scientific evolution goes: once the easy work has been done, the complicated details remain. (Einstein famously (supposedly) said that he no longer understood his GRT once mathematicians started working on it.) So I assume that, in order to get rid of all the papers to review while maintaining scientific excellence, option b) is the only choice. And, as I said earlier, this might not be a bad idea at all. It might also have a positive effect on the content and excellence of the published papers.
Decreasing the number of published papers is complicated, however, and would require us to *rethink* how science is done today. But this is material for another post.