
Yup, I think what is novel here and led to these results is that the dolphin researchers invented a device for generically converting audio into pictures.


Looking into the device they're talking about here: http://www.cymascope.com/

Seems like a pretty cranky group, unfortunately (in the sense of astrology, physics cranks, etc.). :(
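
The CymaScope site doesn't really document its mechanism, but the standard, generic way to turn audio into a picture is a spectrogram. Here's a minimal sketch of that (assuming a mono recording; the file name is hypothetical), just to show there's nothing mystical about "sound as images":

  import numpy as np
  import matplotlib.pyplot as plt
  from scipy.io import wavfile
  from scipy.signal import spectrogram

  # Load a mono recording (hypothetical file; any mono WAV works).
  rate, samples = wavfile.read("dolphin_clicks.wav")

  # Short-time Fourier transform: power at each (time, frequency) bin.
  freqs, times, power = spectrogram(samples, fs=rate, nperseg=1024)

  # Render as an image: pixel brightness = energy in dB at that bin.
  plt.pcolormesh(times, freqs, 10 * np.log10(power + 1e-12))
  plt.xlabel("Time (s)")
  plt.ylabel("Frequency (Hz)")
  plt.savefig("spectrogram.png")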

I'm thinking they may have just found a roundabout way of visualizing echolocation responses rather than visualizing dolphin-to-dolphin communication as images. Piecing together the bits I've gathered from the various articles linked, it sounds like what they did was something like:

  - have a dolphin ping various objects with echolocation and record the response
  - play back the response for different dolphins and reward the dolphins with fish if they retrieve the object that matches the recording
  - see how good the dolphins are at figuring out the mapping from sounds to objects (pretty good; a rough way to quantify that is sketched below)
That's just my guess at what they did; as I said, it's not clear. (If that's actually what they did, it doesn't really indicate language capability, although it's still interesting.) Hopefully they'll clarify their methodology.
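
To put a number on "pretty good": with a forced choice among N objects, you can check whether the dolphins' retrieval accuracy beats chance with a simple binomial test. A sketch with made-up trial counts (the actual numbers aren't in the articles I saw):

  from scipy.stats import binomtest

  # Hypothetical numbers: 4 candidate objects per trial, 20 trials, 17 correct.
  n_trials, n_correct, n_objects = 20, 17, 4
  chance = 1.0 / n_objects  # 25% by blind guessing

  # One-sided test: is observed accuracy above the chance rate?
  result = binomtest(n_correct, n_trials, chance, alternative="greater")
  print(f"accuracy = {n_correct / n_trials:.0%}, p = {result.pvalue:.3g}")

Even strong performance here would only show the dolphins can associate echo recordings with objects, not that they use those sounds as language.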



