AlterFace (1990, imagining computer-aided interaction in 2014)
October 8, 2014
I don’t know why I have been feeling bad for relying so heavily on my AlterFace. I obviously can’t object to the translation function. With it, travel has become a lot simpler than it was five years ago. As long as I bring my box and plug in my earphone, I can forget about all that je ne parle pas français hassle. The box does the work, and the locals and I understand each other just fine.
I also don’t mind the basic “E/E” English-to-English translation feature. Up until three or four years ago, it was always such a pain to try to communicate with people like my mechanic. He had all those ethnic ways of expressing himself, and I never could figure out what the hell he was trying to say. Now, my earphone gives me a translation in words that I can understand.
When I bought AlterFace release 3.0 in 2011, I was really psyched about new advances they had made in E/E. They had moved beyond obvious intercultural problems between people like my neighbor and me. Release 3.0 tackled intracultural E/E: intergenerational and even intrafamilial. It was like being back in the Age of Aquarius or something. Everybody suddenly understood everyone else.
At first, nobody really used the Chrono feature of that release. With that feature, you could use the intracultural E/E translator cumulatively. If you bothered to save all the stuff that someone had been saying to you, you could run it through the translator and come out with a totally synthetic version of the speaker’s words. It was usually a lot more precise and to the point than the words that people actually use as they ramble around and try to get a verbal handle on things. If you turned the compression up high enough, you could boil even a dull person’s thoughts down to an interesting core.
Making the translator work that way was pretty difficult, and nothing much came of it until release 3.2 last year. In that release, they automated the cumulative translator, and they made it operate in real time. They also added features to distinguish and categorize the sources of the input.
At that point, several interesting things began to happen. First, when an old friend was talking to me, I began hearing my box beat him to his own point, telling it to me through my earphone even while he was talking. Usually, that was because he had already made the point previously. Sometimes, though, it happened because the machine was able to identify him as belonging to a certain input group and, in reviewing what that group had told me in the past, was able to reach a summary of something he had never said before, as soon as he had said enough to let the computer guess where he was going. Sometimes the machine didn’t get into this mode until he’d been speaking for a little while – sort of like a car going through first gear, second, etc. until it finally reaches highway speed.
It got worse when I started fooling around with the program. With single-source chronological analysis, I was able to identify an awful lot of instances in which a particular individual had contradicted him/herself or had said the same thing as everyone else. More importantly, in personal matters, it frequently turned out that everybody was just playing back to me the stuff I had told them, mixed up with folk wisdom that I had long since accepted or rejected.
I understand that AlterFace Corp. got a lot of correspondence from users, asking for a new release to deal with these problems. I got my copy of release 4.0 a couple of months ago. The Chrono feature was renamed Diary. It includes a screening mechanism that permits me to squelch relatively useless input sources, so the machine and I won’t have to waste time considering and dismissing their drivel. Instead of hearing an E/E translation of their words, if the squelched individual is someone I’m supposed to smile at, the box gives me a droll summary of some of the source’s more bizarre utterings from the past. If it’s someone I’m supposed to look at seriously, the box starts reciting the source’s past sins.
Not long after getting release 4.0, I started running the cumulative translator against the Great Books of the Western World series that came packaged with my operating system. I’d like to do some more tinkering, but I think I’ve already boiled it down to a database that contains the gist of human wisdom on personal matters that interest me.
So I still have conversations with human sources, but only about everyday functional matters. When it comes to matters near to my heart, personal questions, I turn to the box. I make much more progress that way.
I admit, there’s something missing. Years ago, I was still having long conversations with friends in which we flailed around, very inefficiently, to solve my problems or theirs. We got mixed results, but at least when something went wrong in my life, I was able to go to one of those friends and tell them, and they would already know the whole background of the story, and if I needed a shoulder to cry on, they understood that.
Nowadays, if I wanted someone to be helpful and caring, I wouldn’t know where to start. I’m sure I would have to spend hours, or even days, bringing them up to date on what I’ve been doing. We could probably wire our boxes together and make it happen, but they’d still have to hear that material from their boxes and, frankly, since I don’t know what’s actually getting through to them, I’d rather tell them myself.
The problem is, if I thought it was difficult to get a friend’s ear a few years ago, what with their jobs and families and all kinds of other distractions, it’s damned near impossible now. The few times I’ve tried, they have always wanted to know, first, how come I couldn’t just send them a message that they could run through their synthesizer. And anyway, my friends all use AlterFace pretty intensively, and I think we’ve all become fairly uncomfortable with unassisted face-to-face conversations. I suppose we could use AlterFace in a low-level mode, but that would be something like driving a Ferrari at 20 MPH to see what an old Model T Ford must have been like.
At present, it’s fine, I guess. But I keep thinking that I’d like to get married someday, and I’m not sure how I might meet a candidate. The online dating services are useless for finding a mate, because with AlterFace we all sound alike. I’m hoping the next release will make some advances in this area.