Point of hors d'oevre
This is rather a "fools rush in" thing, but what isn't?
Graphics cards enable three-dimensional graphics to be drawn quickly and convincingly enough for computer games to use fairly realistic settings, but the AI side of gaming seems to have been neglected. So far as i know, which isn't very far, artificial intelligence in games is still implemented entirely in software. This appears to me, a non-gamer, to have led to a situation where there are visually and sonically impressive games which are, however, largely based on violent conflict. I don't necessarily have anything against violence in games, but it seems unimaginative and boring to me.
One way to redress this balance might be to have hardware which does for AI what graphics cards do for graphics. I envisage a multicore processor with some onboard memory, both ROM and RAM, which can do the following (bear in mind that i know nothing about artificial intelligence research, computer hardware or programming, so this is going to be hopelessly naive):
* Hardware realisation of a list-processing language and formal logic. The formal logic is fuzzy and multivalent in a different way than usual. Rather than having Boolean truth and falsehood, perhaps represented as real numbers with intermediate truth values between zero and one, several different systems of multivalent truth values are used at once, for instance tense logic, middle values for modal operators, a value for meaninglessness and so forth, in several dimensions, so that each truth value consists of a series of real numbers analogous to coordinates. The logical operators themselves are probabilistic and can be set to different types, so for example a particular operator could behave as an AND or an XOR, each with a probability of one in two. Variables can be quantified over, as in predicate calculus. There is also an inductive logic operator, allowing the device to conclude that if something happens a number of times with no exceptions it will always be the case, while checking the result for error so that it ceases to apply when an exception is found. (A rough software sketch of such a truth value and operator appears after this list.)
* A series of chatbots which respond to text input by producing a hopefully sensible result, like Eliza, Parry and so forth, including trash-talk and troll bots for games, along with several other personalities (a toy version is sketched after this list).
* Two data structures, one in ROM, the other in some kind of non-volatile RAM, each consisting of a network of concepts. The ROM network represents "common sense" and includes rules such as those of naive physics - e.g. the Y coordinate of a solid object will tend to decrease until its coordinates are adjacent to those of another solid object; coordinates of an object which are incrementing will begin decrementing after a while; horizontally moving objects tend to slow down; and (GASP!) centrifugal force - basically a long list of beliefs about the world which are false but common sense. Other rule sets can be selected, Newtonian, Einsteinian and cartoon physics being examples. There are other rules, like "if an object moves behind another object, it doesn't usually disappear" or "if a series of objects stop moving suddenly in a horizontal direction, they have probably met a window" - flies could do with this one. The RAM contains a network of acquired concepts which connect to each other, forming a kind of knowledge base linked like hypertext (also sketched after this list).
* Use of logic to extrapolate present situations to the future in order to anticipate events.
* Modules for speech and face recognition and speech production.
* Built-in generalisable tree structures representing some kind of decision-making process, perhaps including some specifics like how to play noughts and crosses, paper-rock-scissors and chess well, which could be generalised to other situations by trial and error (see the game-tree sketch after this list).
* Some kind of simulation of a simple human body in a simple world which allows it to exhibit embodied intelligence and do things like understand metaphors through synæsthesia, or rotate or otherwise transform objects to compare them, by running simulations of that kind of thing.
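To make the first item concrete, here is a minimal software sketch of a multidimensional truth value with a probabilistic operator. The three coordinates chosen (degree, tense, meaningfulness) and all the names are illustrative assumptions only, and min/max/absolute-difference stand in for whatever fuzzy connectives the hardware would really use:

```python
import random
from dataclasses import dataclass

@dataclass
class TruthValue:
    """A truth value as coordinates, each in [0, 1]."""
    degree: float    # 0 = false, 1 = true, in between = fuzzy
    tense: float     # 0 = past, 0.5 = present, 1 = future
    meaning: float   # 0 = meaningless, 1 = fully meaningful

def fuzzy_and(a: TruthValue, b: TruthValue) -> TruthValue:
    return TruthValue(min(a.degree, b.degree),
                      (a.tense + b.tense) / 2,
                      min(a.meaning, b.meaning))

def fuzzy_xor(a: TruthValue, b: TruthValue) -> TruthValue:
    return TruthValue(abs(a.degree - b.degree),
                      (a.tense + b.tense) / 2,
                      min(a.meaning, b.meaning))

def prob_op(a: TruthValue, b: TruthValue, p: float = 0.5) -> TruthValue:
    """Behaves as AND with probability p, otherwise as XOR."""
    return fuzzy_and(a, b) if random.random() < p else fuzzy_xor(a, b)

# Example: "the door is probably open" combined with "it was raining".
print(prob_op(TruthValue(0.8, 0.5, 1.0), TruthValue(0.6, 0.0, 1.0)))
```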
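The chatbot item is the most obviously bakeable bit, since a trash-talk bot in the Eliza mould is little more than pattern matching. A toy sketch, with the patterns and lines invented for illustration:

```python
import random
import re

# Each entry: (regex the player's line must match, canned responses).
TRASH_TALK = [
    (r"\byou (suck|stink)\b", ["Bold words for someone in respawn range.",
                               "My pathfinding is better than your aim."]),
    (r"\b(help|stuck)\b",     ["Have you tried not standing in the fire?"]),
    (r".*",                   ["Is that so?", "Keep talking, I'm reloading."]),
]

def reply(player_line: str) -> str:
    for pattern, responses in TRASH_TALK:
        if re.search(pattern, player_line.lower()):
            return random.choice(responses)
    return "..."   # unreachable thanks to the catch-all pattern

print(reply("You suck, bot"))
```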
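The ROM/RAM concept networks might start out as nothing cleverer than a rule table plus a graph of linked concepts, the ROM half fixed and the RAM half growing as the game runs. A rough sketch, with the rule wording and class names invented:

```python
# Naive-physics rules (the ROM half): situation -> default expectation.
NAIVE_PHYSICS = {
    "unsupported_object": "y coordinate decreases until it rests on something solid",
    "rising_object":      "starts descending after a while",
    "horizontal_motion":  "slows down on its own",
    "occlusion":          "an object passing behind another does not disappear",
}

# Acquired concepts (the RAM half): a hypertext-like graph of linked nodes.
class ConceptNet:
    def __init__(self):
        self.links: dict[str, set[str]] = {}

    def learn(self, concept: str, related: str) -> None:
        self.links.setdefault(concept, set()).add(related)
        self.links.setdefault(related, set()).add(concept)

    def related(self, concept: str) -> set[str]:
        return self.links.get(concept, set())

kb = ConceptNet()
kb.learn("window", "invisible barrier")
kb.learn("invisible barrier", "sudden stop")
print(kb.related("window"))
```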
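The "generalisable tree structure" item is basically game-tree search. A minimal noughts-and-crosses minimax, just to show the shape of the thing (all names invented; chess would need vastly more than this, which is the point of putting it in hardware):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class NoughtsAndCrosses:
    board: tuple = ("",) * 9   # 3x3 board, "" empty, "X" or "O"
    to_play: str = "X"

    LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

    def moves(self):
        return [i for i, cell in enumerate(self.board) if cell == ""]

    def apply(self, i):
        b = list(self.board)
        b[i] = self.to_play
        return NoughtsAndCrosses(tuple(b), "O" if self.to_play == "X" else "X")

    def winner(self):
        for a, b, c in self.LINES:
            if self.board[a] and self.board[a] == self.board[b] == self.board[c]:
                return 1 if self.board[a] == "X" else -1
        return 0 if not self.moves() else None   # draw, or None if unfinished

def minimax(game, maximising):
    """Exhaustive game-tree search; returns (score, best move)."""
    result = game.winner()
    if result is not None:
        return result, None
    best = (-2, None) if maximising else (2, None)
    for move in game.moves():
        score, _ = minimax(game.apply(move), not maximising)
        if (maximising and score > best[0]) or (not maximising and score < best[0]):
            best = (score, move)
    return best

# X to play with two in a row on the top line: the search finds the winning square.
nearly_won = NoughtsAndCrosses(("X", "X", "", "O", "O", "", "", "", ""), "X")
print(minimax(nearly_won, maximising=True))   # (1, 2): play the top-right square
```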
I know this amounts to a wish-list, is horribly crude and reflects some kind of failure to understand what's obvious to anyone in the know, but i still feel that some of this at least is possible. You then get some kind of facility like OpenGL and DirectX to use it (is that what an API is?).
CNN: 'Brain' in a dish flies flight simulator
http://www.cnn.com/...H/11/02/brain.dish/ Here, a group of rat neurons is connected to electrical contacts and trained to play Flight Simulator. "It's essentially a dish with 60 electrodes arranged in a dish at the bottom," explained DeMarse, who designed the study. "Over that we put the living cortical neurons from rats, which rapidly begin to reconnect themselves, forming a living neural network -- a brain." A hardware/firmware implementation/simulation of this using more traditional silicon components would work almost as well, and wouldn't start getting smelly after a few days. [zen_tom, Feb 09 2010]
|
|
Problem: We know how to do graphics. We knew how to do graphics before we built graphics cards. They just meant that we could do fancy graphics with smaller overall computers. Woo-hoo for the consumer. |
|
|
We don't know how to do AI. There's no AI technique that people look at and go, "Oooh, if we could only do this three times as fast, we could win the Turing test. Damn!" |
|
|
Chat bots suck, and anyone trying to write a convincing in-game character will quickly notice that free-form interaction just doesn't work well enough to be feasible. Not because there's not enough processing power; we just have no clue what to do with it. So, they go back to a model with a fixed set of variables and a choice of scripted interactions that modify these variables - and you really don't need a chip for that, you just need a decent writer. |
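For what it's worth, the fixed-variables-plus-scripts model described above looks roughly like this in code (names and numbers invented), which is why a decent writer matters more than a chip:

```python
# An NPC is a handful of numbers plus scripted lines gated on those numbers.
npc = {"trust": 0, "fear": 0}

DIALOGUE = [
    # (condition on the variables, line spoken, effect on the variables)
    (lambda v: v["trust"] >= 2, "Fine, I'll tell you where the key is.", {}),
    (lambda v: v["fear"] >= 3,  "Please, just leave me alone!",          {}),
    (lambda v: True,            "I don't talk to strangers.",            {"trust": 1}),
]

def talk(variables):
    for condition, line, effect in DIALOGUE:
        if condition(variables):
            for key, delta in effect.items():
                variables[key] += delta
            return line

for _ in range(3):
    print(talk(npc))   # warms up after two attempts
```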
|
|
Some of this is surely doable though, isn't it? Chatbots exist, Lisp and Prolog exist and so forth. The stress is on the "A" here, not the "I", if you see what i mean. Graphics in games aren't raytraced, so they must be a kludge in some way. Other things can be kludged. A model of a human being made of plastic can still be used to con the police into letting you use the carpool lane. You don't need an android to do that unless they stop you and try to interview the dummy. This seems similar to me. I can get Eliza to talk to someone on a message board and it takes them ages to get wise to it. |
|
|
I have never bought the analogy between brains and computers. The whole thing is a huge mistake. Therefore, whereas it might be possible to make an artificial intelligence device from a functional equivalent to a brain, that would be the hard way. The point i was trying to make is that something can look like a human without actually being anything like one. The point is to design something which can convince a human, with suspension of disbelief, that it's like a human. This sort of thing is old, examples being palæolithic figurines, Da Vinci's robot and early modern automata. This has nothing to do with neurons. If you want to make a doll for practicing CPR, you have to make convincing lungs and ribs, but it doesn't need a spleen or kidneys. |
|
|
Re: Games unimaginative and boring - have another look. I've recently started playing home console games again after being frustrated with them, and have been thrilled with their depth. The last three I played (all sequels - GTA 4, Fable 2, Fallout 3) were genre games with a first-person fighter as lead, but all had "karma" that allowed you to play as good or evil, with the game world changing in response (i.e. it's not just a point score); a long story arc combined with smaller subplots that require different skills; plenty of exploration if you choose to; and in-game characters that become more or less attached to you in response to your choices. The storytelling still trails the graphics, but it's much, much better than I thought it would be. |
|
|
You could be right, [Jutta], and i probably jump to conclusions. I originally got interested in computers because of the graphics, and to some extent that carried over into the more recent games, from the middle of last decade onwards, but the level of originality seemed to decline. I may well be wrong about that, but i was more into the likes of puzzleless adventure games or Tranquility, so maybe not. If there's something new out there like Douglas Adams' 'Bureaucracy', i'd love to find it. Then again, my son's taste in games may be poor and i see those more than others. Speaking of which, if a TRS-80 or PET could manage a vaguely convincing level of AI with the Z-machine more than three decades ago, could that not be put on a single chip today? |
|
|
Yes, i'd heard of that but not in that much detail. I am rather hazy on how it does it. Is it actually still part of a graphics card? |
|
|
I think that the core of your idea is a very reasonable one. As far as I can tell, what you're really pointing out is that we have add-on cards for graphics that are optimised for calculations that include 3D geometry, matrices, and so on. Yet we do not have such a thing for artificial intelligence. |
|
|
The first point that [jutta] makes is valid though - hardware isn't the issue because you need to know what your algorithm is going to be before you can create optimised stuff for it to run on. |
|
|
Your idea contains the first stage towards that, though, because you identify aspects that might be useful (such as hardware implementations of chatbots and optimisation for list-processing programs). |
|
|
I don't think that this is enough though. Graphics are easy, in many ways, because it all boils down to simple rules. Sure, many thousands of simple rules, but simple nonetheless. |
|
|
Maybe a 'simple' AI system based on neural nets could be implemented as an add-on card - and the game/application could run its own training system to prepare the hardware. That would require new training for every application... |
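A sketch of what "the application runs its own training system" could mean in practice: train a perceptron in ordinary software, then hand the finished weights to the (entirely hypothetical) card. The task and all the names here are invented:

```python
# Train a single perceptron in software; a game shipping with a hypothetical
# AI card would run something like this once, then upload the weights.
def train_perceptron(samples, epochs=50, lr=0.1):
    n = len(samples[0][0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, target in samples:
            out = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = target - out
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# Toy task: "attack if the enemy is both weaker and close" (features: weaker?, close?).
samples = [((1, 1), 1), ((1, 0), 0), ((0, 1), 0), ((0, 0), 0)]
weights, bias = train_perceptron(samples)
print(weights, bias)   # the weights a card could then apply every frame
```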
|
|
Raytracing on a graphics card? Err... surely if you can tinker with the pipelines, that would slow it down? Doesn't it need some kind of hardware to do that? Some of this can be done easily. For example, if the formal logic is done in RPN, a data bus can carry a string of variables which then sit on a stack, then the operators in the right order, and it outputs the resulting values on the data bus when requested. It surely isn't that hard to do conventional Boolean algebra with a stack-based processor, since they're made of the stuff. Beyond that, a pseudorandom number generator can switch the variables between logic gates and you get quasi-probabilistic logical operations. Correct me if i'm wrong, but isn't a lot of the work of a graphics card matrix manipulation? This is also about matrices. The truth value is represented by a series of coordinates, and that's a matrix, which is then manipulated by the logical operators. The simulation involving rotation, reflection and the like of objects to be tested for congruence is also a series of matrix operations. Other things aren't, like the list processing and the knowledge representation, but isn't that a start? It does seem that a lot of this could be done with a GPU, given what you've linked, but there's other stuff. Yes, it's halfbaked. |
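The RPN part of that annotation is easy enough to try in software. A minimal sketch, with "prob" standing in for the quasi-probabilistic operator that picks a gate at random:

```python
import random

def eval_rpn(tokens):
    """Evaluate Boolean RPN, e.g. [True, False, 'and']; 'prob' picks a gate at random."""
    stack = []
    for tok in tokens:
        if isinstance(tok, bool):
            stack.append(tok)
        else:
            b, a = stack.pop(), stack.pop()
            if tok == "and":
                stack.append(a and b)
            elif tok == "or":
                stack.append(a or b)
            elif tok == "xor":
                stack.append(a != b)
            elif tok == "prob":            # behaves as AND or XOR, 50/50
                stack.append(a and b if random.random() < 0.5 else a != b)
    return stack.pop()

print(eval_rpn([True, False, "or", True, "prob"]))
```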
|
|
Later: Thanks, [boysparks]. In the area of logic, the floating point representation of colour could be useful. Looking at the exact algorithms, i suppose i could maybe see a way through, though being terminally stupid is a bit of a handicap there. |
|
|
ok, so a GraCa is a board with memory and processing capacity dedicated to rendering video output and adding processor-rich effects to that output as the software dictates: producing random clouds of smoke, fog and rain, and doing other superficial overlays to the output of the software, integrating it seamlessly (seamless because it all happens on the same processor). To have a card that renders AI you need to describe the content of the input and the content of the output. What data is the card processing? How do you make the card serve multiple applications (as a sound or video card does)? I suspect that your idea could do little more than a random number generator (which used to be an independent piece of hardware). Anybody else see my point? A GraCa takes a small pipe of input and turns it into a large pipe of output. To make sense from a resource perspective an AI card would need to do a similar thing, processing a small volume of data (text: "Dog bites man") and producing a large output (an image of a dog biting a man), OR the inverse. Please explain, in that context, what you would expect this invention to do. |
|
|
I missed this one the first time around - and, like Jinbish, I think this could become viable if the card somehow supported various perceptron models; single-layer or multi-layer feed-forward perceptrons of different (standard) configurations. |
|
|
You'd want to be able to quickly load and save various node-level configurations in a "training buffer", as well as having some sort of training loop storage. You'd want to be able to quickly map inputs and outputs to memory locations on the board, and have active learning and "old-dog" modes for whether you want a particular arrangement to adapt to changes etc. |
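From the host side, that driver interface might look something like the sketch below. Everything in it - the class, the method names, the "old-dog" flag - is invented, since no such card exists:

```python
class PerceptronCard:
    """Software stand-in for the imagined add-in card: a bank of feed-forward
    perceptron configurations that can be saved, reloaded and frozen."""

    def __init__(self, n_slots=8):
        self.slots = [None] * n_slots     # each slot holds a weight matrix
        self.learning = [True] * n_slots  # False = "old-dog" mode (frozen)

    def load_weights(self, slot, weights):
        self.slots[slot] = weights

    def save_weights(self, slot):
        return self.slots[slot]

    def set_old_dog(self, slot, frozen=True):
        self.learning[slot] = not frozen

    def infer(self, slot, inputs):
        w = self.slots[slot]
        return [sum(wi * xi for wi, xi in zip(row, inputs)) > 0 for row in w]

card = PerceptronCard()
card.load_weights(0, [[0.5, -0.2, 0.1]])   # one output neuron, three inputs
card.set_old_dog(0)                        # stop it adapting
print(card.infer(0, [1.0, 0.0, 1.0]))      # [True]
```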
|
|
None of that is going to allow you to converse intelligently with a chat bot (at least not yet anyway), but you might be able to train an OCR device, do basic face/voice/biometrics recognition, indeed any application where complex data needs to be compared against "target" configurations (where those targets' archetypicality is not always easy to pin down algorithmically). |
|
|
Somewhat disturbingly, this "neural hardware" chip has been done, using the best and most readily available neural material known to man: rats' brains. |
|
|
2022 update: user nineteenthly gets his/her wish. nVidia RTX cards have been around for some time now. The RTX line has "tensor cores" that specialize in matrix multiplication for accelerating deep learning tasks (4x as fast as CUDA cores). They're behind the algorithms that make DLSS upsampling possible. |
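For what that means in practice: tensor cores are engaged by low-precision matrix multiplies, which (assuming PyTorch and an RTX-class GPU; this is a sketch, not nVidia's documentation) looks like:

```python
import torch

# Half-precision matrix multiply: on an RTX-class card this is the sort of
# operation dispatched to the tensor cores rather than the plain CUDA cores.
# Falls back to float32 on CPU so the sketch still runs anywhere.
device = "cuda" if torch.cuda.is_available() else "cpu"
dtype = torch.float16 if device == "cuda" else torch.float32
a = torch.randn(4096, 4096, device=device, dtype=dtype)
b = torch.randn(4096, 4096, device=device, dtype=dtype)
c = a @ b              # the core operation of deep-learning inference
print(c.shape)
```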
|
|
...but they're still not good enough to make an acceptable AI, given how little effort developers are willing to put into it. The real problem is that the jump from scripted encounters to acceptable AI is HUGE, and no company is willing to risk spending that much making it happen for the first time. Not even companies making 4X strategy games are willing to build decent AI. |
|
|
Here's an example: In an open world when should my artificial companion start sneaking around? The easy answer is sneak when the player sneaks. Dead simple to code. The real AI solution is to understand when sneaking is probably beneficial. Enormously expensive in terms of coding and computing power. |
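The two approaches side by side, with every threshold and name invented: the scripted version is one line, while the "understand when sneaking is beneficial" version is an open-ended cost/benefit model, of which this is only the crudest sketch:

```python
def should_sneak_cheap(player_sneaking: bool) -> bool:
    # The scripted answer: mirror the player.
    return player_sneaking

def should_sneak_smart(enemies_nearby: int, enemies_alerted: int,
                       companion_health: float, ambush_value: float) -> bool:
    # The "real AI" answer: weigh expected benefit against expected cost.
    benefit = ambush_value + 0.3 * enemies_nearby * (1.0 - companion_health)
    cost = 0.8 * enemies_alerted          # sneaking is pointless once seen
    return benefit > cost

print(should_sneak_smart(enemies_nearby=3, enemies_alerted=0,
                         companion_health=0.4, ambush_value=0.5))   # True
```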
|
|
Or handling equipment. Just hard-code equipment sets and call it done, or have NPCs capable of strategizing based on the player's choice of gear and instructions, the environment, and the enemies it seems likely they'll encounter? I think it would be incredible to have an AI companion that's smart enough, but would enough people agree with me to make up for the higher price or lower-quality graphics? |
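And the equipment question reduced to its simplest possible form: a scoring pass over the inventory against what the NPC expects to face. Item stats and weights are invented:

```python
INVENTORY = [
    {"name": "fire sword", "damage": 8, "element": "fire"},
    {"name": "ice axe",    "damage": 6, "element": "ice"},
    {"name": "plain club", "damage": 5, "element": None},
]

def pick_weapon(expected_enemy_weakness):
    # The hard-coded alternative would be "always carry the fire sword"; the
    # strategic version scores each item against what the NPC expects to meet.
    def score(item):
        bonus = 4 if item["element"] == expected_enemy_weakness else 0
        return item["damage"] + bonus
    return max(INVENTORY, key=score)

# Facing enemies weak to ice, the lower-damage ice axe wins the comparison.
print(pick_weapon("ice")["name"])   # ice axe
```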
|
|
Not only is it a big risk, but the AAA studios are doing everything possible to minimize risk. Collectively they put out about one innovative game every 4 years or so, and one revolutionary game every 15 years or so. |
|
|
Wouldn't it be cheaper and more effective to pay humans in low-wage parts of the world to pretend to be AI? |
|
|
//Today is 2/2/22// Arse. I had it in my diary to post on the special chime idea. But I didn't look in my diary all day. |
|