This would start as a telephone-based facebook for the blind community and then incorporate Amazon- and Google-like features, but with an emphasis on real-time voice-to-voice communication. This system would allow people to talk individual-to-individual, one-to-many, or individual-to-the-system.
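As a rough sketch of those three modes -- assuming a Python representation, with invented names like Mode and Conversation that are only for illustration, not an actual design -- a voice session might be modeled like this:

    from dataclasses import dataclass, field
    from enum import Enum, auto

    class Mode(Enum):
        """The three conversation modes described above."""
        ONE_TO_ONE = auto()    # individual-to-individual
        ONE_TO_MANY = auto()   # one-to-many
        TO_SYSTEM = auto()     # individual-to-the-system

    @dataclass
    class Conversation:
        """A voice session: who is speaking, in which mode, and to whom."""
        speaker: str
        mode: Mode
        listeners: list[str] = field(default_factory=list)  # empty when talking to the system

    # One example of each mode:
    chat = Conversation("alice", Mode.ONE_TO_ONE, ["bob"])
    broadcast = Conversation("alice", Mode.ONE_TO_MANY, ["bob", "carol", "dave"])
    query = Conversation("alice", Mode.TO_SYSTEM)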
Since this would be an open source project, hackers could develop a spoken audio language that would allow them to change "who they are talking to" and "what they are talking to them about" at the same time.
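To make that concrete, here is a minimal sketch of what such a command might look like once the speech has been transcribed to text. The phrasing "talk to X about Y" is invented for illustration; a real spoken language for this would be whatever the community actually develops:

    import re

    # Hypothetical command: "talk to <who> about <what>" switches the
    # addressee and the topic in a single utterance.
    COMMAND = re.compile(r"talk to (?P<who>.+?) about (?P<what>.+)", re.IGNORECASE)

    def parse_command(utterance: str) -> dict | None:
        """Return {'who': ..., 'what': ...} if the utterance matches, else None."""
        match = COMMAND.match(utterance.strip())
        return match.groupdict() if match else None

    # "talk to the gardening group about tomatoes" changes both at once:
    print(parse_command("talk to the gardening group about tomatoes"))
    # {'who': 'the gardening group', 'what': 'tomatoes'}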
This was inspired by the book "Exploding the Phone", which is about hackers, many of them blind, who hacked both the AT&T routing system -- by whistling -- and the system of human operators -- by pretending to be people they were not.
Since (1) evolution seems to have built people -- first mouth, then nose, then eyes, then ears -- and ears process less data but use the brain more like a database, and (2) we inherited a mostly-spoken audio language that was the solution to millions of years of experimentation, maybe we should be preparing for computers to go the same way, rather than preparing for them to become the hyper-visual super TVs they seem to be becoming.