Nice – a full periodic table produced by 96 different printmakers, the elements rendered in every combination of woodcut, linocut, monotype, etching, lithograph, silkscreen, and collage. This one’s Tungsten (aka wolfram). Unfortunately they don’t all have that much scientific info attached to the images, but you can always get that here.
via Bionic Teaching
A good example of how mixed technologies can pretty the place up a bit. It’s a macro display system that responds to mobile txt input with movement and sound.
In each installation, participants send their thoughts and questions via SMS and voicemail. The responses are then projected and added to a dynamic spatialized audio composition.
Now – what was it we were going to do with the wall of the EIM…?
Some quick run-throughs of what you can do with Google Maps. I hadn’t realised that you could embed YouTube clips in the tags – cool.
Tony Hirst at the Open University has put together a nice custom search engine called “How do I…?”. It restricts its search to sites that offer “howto” videos, and the few tests I ran on it (how do I upload to Flickr, how do I change screen resolution, etc.) were pretty successful – i.e. a decent answer on the first page. The format encourages a natural-language search – you only need to think about what you want to do and then complete the question: How do I…?
Just tried it with How do I… make falafel and it was fine for that as well.
It sparks my curiosity because I’m convinced that we will need a different “model” to support Web 2.0 and mobile learning – we can’t possibly do it by sending people out to visit users or equipment that are having problems. Users will certainly need to be more self-supporting, but we will also need to show more intelligence in how we provide assistance – knowledgebases and community support (users helping each other) are essential if we’re to consider supporting tens of thousands of people using scores of different services (many of which we will have no control over). This is about the future, obviously, but it will come.
These thoughts dovetailed neatly with a post on moblearn, the informal blog of the people working for Tribal CTAD. I’m sceptical about the value of authoring tools for mobile learning in FE (primary, maybe), as ubiquitous learning – self-documentation – micro-content/reinforcement exercises via Twitter et al. seem to be the things that are working: but these are the people linked to the biggest initiative in mobile learning in the sector, and this is what they have to say about “support” (their emphasis):
“And what about support? From our experience, we have NEVER had decent technical support from ANY of the phone companies who have supplied the devices.”
With a little research it turns out that RSS-to-voice is not a new technology at all – there have been services around since at least 2005, I just hadn’t heard of them. VocalFruits is *almost*, but not quite, there. It gives you a public page that lists all your posts and was easy to set up, but it’s a pay service (after the first 100 listens), and it’s truncating the translations for some reason, so I need to find something better.
Aah! Worked out why it was truncating – I had my feed settings at “Summary” rather than “Complete” – you set this from Dashboard/Options/Write. I’m glad I found that actually as it annoys me when sites only feed me a summary and it’s generally considered a bit *rude* in the blogosphere.
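For the curious, the difference is visible in the feed itself: a “Summary” feed puts only a short teaser in each item’s description, while a “Complete” feed includes the full post. Here’s a minimal sketch of how a tool could guess which kind of feed it’s been handed – my own heuristic, not anything VocalFruits actually does, and the 200-character threshold is an arbitrary assumption:

```python
import xml.etree.ElementTree as ET

def feed_is_summary_only(rss_xml: str, threshold: int = 200) -> bool:
    """Heuristic: if every item's <description> is short, the feed is
    probably set to 'Summary' rather than 'Complete'."""
    root = ET.fromstring(rss_xml)
    descriptions = [
        (item.findtext("description") or "")
        for item in root.iter("item")
    ]
    # An empty feed proves nothing, so only report True when we have
    # items and all of them are suspiciously short.
    return bool(descriptions) and all(len(d) < threshold for d in descriptions)
```

A real checker would also look at the `content:encoded` element that WordPress uses for full post content, but this shows the idea.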
Anyway – hopefully that will fix VocalFruits, in which case I’ll probably just stick with that. I’ve registered the site at Talkr – another RSS-to-voice tool – so anyone who has an account with them can hear it, and I was looking at Odiogo, which seems to have the neatest solution: a button that you can insert in each post which opens a player on the page. It doesn’t work with WordPress at the moment though, and it’s ad-supported, but the voice quality is good.
So – what did I learn? Well, firstly, it’s a lot easier to just do this stuff and see what happens than it is to research it in advance and try to work it out in theory. Altogether I’ve spent about 3 hours on this – and most of that was troubleshooting/research; signing up to VocalFruits itself only takes 5 minutes. Secondly, that this enables a feature that I didn’t have to choose – blogs give you an RSS feed automatically. It’s like having a web page with an output on it: you may not know what you can plug it into, but if it isn’t there you’re stuffed. What I’m looking for now is a service that will take the entire text of this site and stick it into a tag cloud so I can see what I’m talking about.
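While I hunt for that service, here’s a minimal sketch of the guts of one – plain word-frequency counting, which is all a tag cloud is before the fonts get scaled. The stopword list is just a handful of common words picked for illustration, not what any real service uses:

```python
import re
from collections import Counter

# Illustrative stopword list -- a real tag-cloud service would use a
# much longer one.
STOPWORDS = {"the", "and", "a", "to", "of", "it", "is", "that", "in", "for"}

def tag_cloud(text: str, top_n: int = 10) -> list[tuple[str, int]]:
    """Return the top_n (word, count) pairs -- the raw data a tag-cloud
    service would turn into differently sized links."""
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(w for w in words if w not in STOPWORDS and len(w) > 2)
    return counts.most_common(top_n)
```

Feed it the site’s full text and the biggest counts become the biggest words in the cloud.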
Oh, I just couldn’t resist it. Can’t say I’ve ever interacted with an advert quite like this before.
The original is here and an alternative take is here – (the version above was shot in an alley behind the Savoy Hotel, btw – and yes, that is Ginsberg).
via Bionic Teaching
VocalFruits is a site that promises to take an RSS feed and turn it into a voice feed (its tagline is “Compose your vocal information system”). I’ve only just signed up for it and don’t really get how it works yet – but as a test I’ve registered my first feed (this page), and I’m supposed to get a voice called KATE to read it back to me.
If I’m being a bit vague it’s because the site isn’t massively clear – translated from French, I would guess, and without much of a tour. It’s a partner to a site called xFruits, which offers a variety of widgets for mashing RSS feeds and/or converting them to post/mobile/mail formats – but it made me go “Wow!” when I saw it, and Web 2.0 is a lot about the wow. So I’ve stuck some code in a text widget at the top of the page – let’s see what it does.