halfbakery: It's the thought that counts.
This is a piece of HARDWARE that runs your phone and all your apps.
Not simply going into a single app, but doing everything that needs to be done ACROSS the apps.
It replaces the phone interface WITH A VISUAL INTERFACE of its own. A very simple one at that, usually with only three choices,
and never more than five on the screen. It also has an audio interface for chat and for audio cues. It combines and presents results as requested.
It lets you do things that the app cannot do, or that the assistant (Siri, Cortana, Google Assist) cannot do, or that can only be achieved with a combination of apps. It can do in one step what takes a few steps to achieve in the app.
(Examples from Android)
Read out the alarm name (performs the several steps needed, including saving).
Set a repeating task except on Wednesdays, with a specific ringtone, an alert that repeats until stopped, and snooze options. (Uses Alarms for the ringtone and the repeating alert; Tasks for marking it done; Reminders for the calendar. When marked done, it leaves a trace in a spreadsheet and deletes the alarm and reminder...)
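A minimal sketch of that one-step orchestration, with invented connector classes standing in for the real Android alarm, tasks and calendar apps (none of these names are real APIs):

```python
from dataclasses import dataclass, field

WEEK = ("Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun")

@dataclass
class AlarmApp:          # stand-in for the phone's alarm app
    alarms: dict = field(default_factory=dict)
    def set(self, name, days, ringtone):
        self.alarms[name] = {"days": days, "ringtone": ringtone}

@dataclass
class TaskApp:           # stand-in for the tasks app
    tasks: dict = field(default_factory=dict)
    def add(self, name):
        self.tasks[name] = "open"

@dataclass
class CalendarApp:       # stand-in for the calendar/reminders app
    reminders: dict = field(default_factory=dict)
    def add_reminder(self, name, days):
        self.reminders[name] = days

def one_step_repeating_task(alarm, tasks, cal, name, ringtone,
                            skip_days=("Wed",)):
    """One user request fanned out across three apps -- the steps
    the user would otherwise perform by hand in each app."""
    days = [d for d in WEEK if d not in skip_days]
    alarm.set(name, days, ringtone)
    tasks.add(name)
    cal.add_reminder(name, days)
```

The point is only that the fan-out is mechanical once the overlay knows each app's actions; the real work is in the connectors.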
See the explanation on the Intelligate link for how this uses a limited dictionary of expected and needed input, and how it works differently from deep learning and LLMs.
The intelligate patent
https://patents.goo...tent/US7216073B2/en Simple example: for flight booking, the website asks for and retrieves, interactively and in natural language, the info needed by the online web forms: source, destination, preferred price and time, etc. [pashute, Aug 23 2023]
Vernon...
Alternating_20Respo...0Radiate_20Reaction ...we hardly (lately, anyway) knew ye. [whatrock, Aug 24 2023]
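The patent's flight-booking example boils down to slot filling over a limited dictionary of expected answers per form field, rather than open-ended language modelling. A toy sketch of that style of dialog (the slot names and vocabulary here are invented):

```python
# A limited dictionary of expected answers per web-form field,
# in the spirit of the slot-filling approach described above.
SLOTS = {
    "source":      {"question": "Where are you flying from?",
                    "expect": {"london", "paris", "tel aviv"}},
    "destination": {"question": "Where to?",
                    "expect": {"london", "paris", "tel aviv"}},
    "time":        {"question": "Morning or evening?",
                    "expect": {"morning", "evening"}},
}

def fill_form(answers):
    """Walk the slots, keep only answers found in the expected
    dictionary, and report which slots still need a question."""
    form, missing = {}, []
    for slot, spec in SLOTS.items():
        value = answers.get(slot, "").lower()
        if value in spec["expect"]:
            form[slot] = value
        else:
            missing.append((slot, spec["question"]))
    return form, missing
```

Because the dictionary is closed, the system can always tell which question to ask next, with no deep learning involved.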
Ah, but this is a piece of HARDWARE that runs your phone and all your apps.

Not simply going into a single app, but doing everything that needs to be done ACROSS the apps.

It replaces the phone interface with a visual interface of its own. A very simple one.
As usual I need to edit the idea for clarity.
Just a simple example: I don't know Siri and the iPhone alerts app, but in Android you can't ask Google Assist to set the alarm to "Read out the alarm name", though you CAN do that as a human in the app.
You cannot set a specific ringtone for certain Google Tasks if they are set as a reminder with a date and time. You can't make the reminder alert sound repeat more than once. In repeated events you cannot set days on which the event should not show up, or on which the reminder should not ring. This is not available in the user interface at all.
And you cannot show a calendar title event (like Hebrew dates) conditionally, so that you see the Hebrew date only if there is an event on that date.
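That conditional display is a one-line rule once the overlay can see both calendars; a sketch (the function name and rule shape are invented for illustration):

```python
def day_titles(events, hebrew_date):
    """Show the Hebrew-date title line only on days that
    actually have events -- the conditional rule described above."""
    return ([hebrew_date] + events) if events else []
```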
All this is easily achieved by combining smart actions and tying different apps together:
Alarms for the reminder sound, but at the same time a calendar entry and a task, combined. When the task is marked done, the other two are deleted, and a trace of their having been there is recorded. Copied events from a calendar for Hebrew dates, and many other automated smart tasks that can easily be deduced once the new interface knows all the apps it has at hand and how to interact with them.
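The mark-as-done cleanup described above, sketched with plain dicts standing in for the alarm app, the reminder app, and the spreadsheet:

```python
def mark_done(name, alarms, reminders, trace_log):
    """On task completion: delete the companion alarm and reminder,
    but append a trace row (e.g. destined for a spreadsheet)."""
    alarms.pop(name, None)
    reminders.pop(name, None)
    trace_log.append({"task": name, "status": "done"})
```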
Please read the revised idea and tell me if you still think it's unoriginal.
For a hot minute there I thought Vernon had resurfaced.
You make it short and get boned for being unoriginal.
You explain it and get boned for being too long.
Oh well, here goes. Last attempt...
I didn't vote one way or another, sorry. I just don't feel like I understand it well, so I abstained. Vernon was known for extraordinarily long-winded posts. Very Ent-like: it takes them a very long time to say something, so they don't say anything unless it is worth taking a long time to say. It wasn't intended as an insult.
"It looks like you're writing a letter!" I got flashbacks of Clippy reading this...
So, it's just "better voice interaction with a better display" for everything?
//without you ever having to "learn" that app// You might not have to, but somebody does, in order to have your system understand the app.
And why not say "AI"? Something this complex will almost certainly need it (or become it).
I think you'd be better off starting completely from the ground up: your own OS with your own interface & your own "apps" running in the background. That way you know everything will integrate, rather than getting different systems to work together.
A similar thing has been done by an Israeli company with legacy mainframe apps when the first web UIs came out in the late 1990s and early 2000s. Before that the doctor had to know to write G863 to get to your medication list, and then in order to save it she had to press S333 and her password, but only if it was Tuesday.
I was the CTO of a company called Generize, which was supposed to make automatic UIs from data, database structures and existing SQL queries. So it's possible to do this.
I also advised a company called Intelligate (no longer in existence) that could ask you questions about missing info, according to the websites it was interacting with, in a natural-language chat, and then consolidate your answers into the forms of those web apps in an interactive way. (I just found their patent online!!!)
Later I myself developed, with my team at NDS, a system to automatically test TV set-top receiver-converters with similar technology. So that's another part that's doable.
[neutrinos shadow] no. Of course we, the new-phone programmers, have to learn the apps and mark them out with a markup, and then teach our new UI to interact with the old one. It will be a constantly improving process.
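The "markup" mentioned above could be as simple as a per-app manifest telling the overlay which UI steps realise each action. All field names and step strings below are invented for illustration; real Android automation would use the platform's accessibility or intent APIs:

```python
# Hypothetical manifest describing how to drive an existing alarm app.
ALARM_MARKUP = {
    "app": "com.android.deskclock",
    "actions": {
        "set_alarm": {
            "inputs": ["time", "name", "ringtone", "days"],
            "steps": ["open", "tap:add", "fill:time", "fill:name",
                      "tap:ringtone", "pick:ringtone", "tap:save"],
        },
    },
}

def steps_for(markup, action):
    """Return the UI steps the overlay must replay for an action."""
    return markup["actions"][action]["steps"]
```

Improving the system then means growing these manifests, app by app, without the USER ever seeing them.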
But the USER is shielded from seeing all the crappy apps and having to install them and try them out. The USERs are the ones who won't have to do lengthy actions where simple ones should be available.
Would the phone help the phone's AI better understand what I say, the instructions I give it, clear up typical (daily) misunderstandings of what to do, monitor, ignore, remember etc., and play Jethro Tull's Stand Up album at 6:30am weekdays, but not Wednesdays, when I'd instead like it to play To Cry You A Song until I say stop (which may never happen)?
So does this thing clip on to the front of your phone, and have little retractable soft pads that push onto the screen to control the phone interface? I like that idea. You could stack these to the ceiling.
The idea for this device seems to come from frustration that the regular phone interface doesn't let you do everything you want it to do. All phone (and computer) user interfaces (UIs) are restrictive* and sacrifice universality - i.e. having the ability to make every possible command - for immediacy and safety; immediacy, because a restricted interface makes it easier for a user to pick the most appropriate command from a set of options, and safety because a restricted interface also makes it easier for the software to protect the device and its data from erroneous commands.
In this idea you layer another computer with its own UI on top of the first. This second UI will be able to combine commands from the first UI in new and interesting ways, but will not be able to extend or go outside of the restrictions in the first UI, so will remain restricted. In addition, this second UI will inevitably also have to be restrictive in order to make itself usable and readily understandable to the user. Might not the user get just as frustrated with the restrictions in this UI?
[* except for an ideal Turing Machine of course]