The only descriptions I could find of how to make HTTP calls in Processing were messy or weird, requiring things like writing out the raw HTTP GET syntax in a string, or using a wrapper around the Apache JARs. I wanted none of this. After some digging and experimentation, I cobbled together this solution, based mostly on Jack Kern‘s code (linked to below), which uses an alternate way of invoking the Apache HttpClient library, since the approach described in the Apache docs didn’t work at all.
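For comparison's sake, here is a minimal sketch of a plain GET using only `java.net` from the standard library (no Apache JARs at all). This is not Kern's approach, just a baseline to show what the call boils down to; the URL is a placeholder, and a Processing sketch could use the same code since Processing sketches are Java.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class SimpleGet {

    // Fetch the body of a URL with a plain GET; no external JARs needed.
    static String get(String address) throws Exception {
        URL url = new URL(address);
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("GET");
        StringBuilder body = new StringBuilder();
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream()))) {
            String line;
            while ((line = in.readLine()) != null) {
                body.append(line).append('\n');
            }
        }
        return body.toString();
    }

    public static void main(String[] args) throws Exception {
        // Placeholder URL; swap in whatever endpoint you actually need.
        System.out.println(get("http://www.example.com/"));
    }
}
```

Apache HttpClient earns its keep once you need connection pooling, redirects, auth, and so on; for a one-off GET in a sketch, the standard library is enough.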
If I never just sit down and push massive amounts of thoughts and sources out, I’ll never get around to writing a well-composed blog post; hence this: various items that I’m thinking about or have come across recently. Sorry.
First up is the glut of material I’m currently wading through/researching for Music Hack Day Boston 2012 at the MIT Stata Center. You can learn about it here:
Let’s see, what else. There are all of the APIs & Data available for the Hack Fest/Competition:
The Echo Nest: http://developer.echonest.com/
This Is My Jam: http://www.thisismyjam.com/developers
Free Music Archive: http://freemusicarchive.org/api
Here’s how to get the Echo Nest Hotness factor of an artist (in this case, 13 & God)…
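For the record, a sketch of roughly what that request looked like. The endpoint and parameter names (`api_key`, `name`, `format`) are my recollection of the old public v4 docs, not guaranteed, and `YOUR_API_KEY` is a placeholder; the Echo Nest API has since been shut down, so this only builds the URL rather than calling it.

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class HotttnesssUrl {

    // Build the (historical) Echo Nest v4 artist "hotttnesss" request URL.
    // Endpoint and parameter names are assumptions from the old public docs;
    // YOUR_API_KEY is a placeholder, and the API no longer exists.
    static String hotttnesssUrl(String apiKey, String artist) {
        return "http://developer.echonest.com/api/v4/artist/hotttnesss"
                + "?api_key=" + URLEncoder.encode(apiKey, StandardCharsets.UTF_8)
                + "&name=" + URLEncoder.encode(artist, StandardCharsets.UTF_8)
                + "&format=json";
    }

    public static void main(String[] args) {
        // Artist names with spaces and ampersands need URL-encoding,
        // which is most of the reason to build the query this way.
        System.out.println(hotttnesssUrl("YOUR_API_KEY", "13 & God"));
    }
}
```

The response, as I remember it, was a small JSON object with a hotttnesss score between 0 and 1.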
Here’s the Echo Nest “Sandbox” of exclusive media for Air…
And on a CEMI-related note, this good technical description of Binaural Room Simulation. It surfaced via an attendee of the Tapped NFC Hackathon (which happened today), who was talking about “Augs” glitch-tripping (time-shifting all audio by minutes, hyper-filtering the visual field, flipping audio frequency bands) while wearing Oculus Rifts and trading glitch/aug-configs via RFID bracelets. (Yeah.)
Crashing forward, a slew (as usual) of Linked Data & Semantic Web content/news/tools. The LODLAM (Linked Open Data for Libraries, Archives, and Museums) Summit 2013 & Challenge in Montréal, which I am intent on going to. This tome on the advantages of RDF, from one person’s perspective. The LODGRefine tool from the LOD2 Project (whose link seems to be dead ATM), an extension of Google Refine (formerly “Gridworks” from Metaweb, makers of Freebase, before the Goog wisely snatched them up), and RDF Refine, which LODGRefine is partially built with and which can reconcile against SPARQL endpoints and RDF dumps (you know, for the next time you’re excited to do that). OpenCalais for a bunch of metadata enriching, discovery improvement, and NLP on text-based assets from (to start with) the Open Access collection in the repository I work on, DSpace@MIT. Learning about the Open Annotation Data Model, and trying to remember the name of a temporal database platform that a colleague was talking about the other day.
3. Music For Production/Momentum
I have been trying to keep track of music that makes me more productive. Generally it has minimal vocals and is some form of electronic. I always come back to Underworld, as a rule.
Dan Deacon has been added to the list.
And I’d include some of the binaural beat tracks I’ve collected over time. Also, “A Hawk And A Hacksaw” because it seems to match the pace of my life a lot of the time.
Also, when I have something in my clipboard, I can feel it in my left index finger and thumb.
Until the next overflow!
The two facets, or frames of reference, that I find most useful to my personal definition & experience of art and its purposes are:
1.) An artwork’s ability to convey an emotional reality (or at least an “internal” state) from the artist’s mind to the observer/consumer’s.
2.) An artwork’s ability to represent knowledge (or information; a distinction I don’t intend to get into here) in a way that is abstracted from its direct, original representation, and that is more compelling, or conveys more (perhaps the thread that ties #2 back to #1), than if it were simply echoed.
Daniel Libeskind‘s “eL Masterpiece” chandelier, for me, is a fascinating representation of the latter, specifically because of its scaled translation of time and its non-metaphorical use of light (not to mention the use of supercomputers to generate the data that it surfaces).
I also very much like that it is a real-world (what Aurelia Moser calls “analographic”) data visualization object, giving it more of a visceral “weight” than one that only appears on a screen. This does not add any ability or functionality, but it is hard for me to believe that there isn’t something much more communicative, or transmissive, about it because of its actual presence.
It is difficult to say, not having experienced the “playback” (?) of it in person (or even on-screen; link, anyone?), whether it is more compelling when one is standing in a room with the piece, which perhaps brings us to the realm of discussion in this Idea Channel webisode…