halfbakery
Yes, sorry - not remotely whimsical, just nuts-and-bolts.
Someone must already be working on this.
Machine Learning (the technology, the industry and the hype) is growing rapidly. A hugely general statement, acknowledging that the term is used very loosely as a trendy tech catchphrase. Used here in a totally broad and vague sense, due to a lack of any specific knowledge on my part.
Growing in capability, size and application. Growing in deployment, public awareness and marketing.
It's following the typical tech pathway from academic research to commercial tech lab development to geek hacklab implementations to earliest consumer applications...
...and as it grows in usefulness (and consumer applications begin to grow in number), the demand for ML-specific hardware will grow.
Google's Coral Edge TPU is now available as a consumer product, and certainly more will follow. But currently it comes as a big GPU-style PCI card, and requires extensive technical knowledge to set up. (edit... and in a USB stick format too)
Intel's Alder Lake is a step towards on-chip integration of different types of cores for different functions, but nothing ML-specific.
Exponential growth and spread of ML-based applications will happen as they become more useful and usable.
So - your iPhone (a couple of generations down the line) will need TPU(s) to do the latest cool "ML" thing quickly, and so will your laptop.
And a few generations later, it will be integrated on the CPU.
For those who fear AGI is just around the corner: it goes everywhere with you, it does everything for you, and it will know everything about you. And you will love it, because it does cool things. Quickly.
(post edit. so far, things like recognising people from their voice, image etc., real-time physics modelling, object detection/identification...)
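For a sense of what "extensive technical knowledge to set up" currently means, here is a rough Python sketch of driving an Edge TPU through the TensorFlow Lite delegate API. It is an illustration only: the model file name and dummy input are my assumptions, and on real Coral hardware Google's lighter tflite_runtime / pycoral packages would normally be used instead of full TensorFlow.

import numpy as np
import tensorflow as tf

# Hand the heavy tensor ops to the Edge TPU via the TensorFlow Lite delegate.
# "mobilenet_v2_edgetpu.tflite" is a hypothetical pre-compiled model file.
interpreter = tf.lite.Interpreter(
    model_path="mobilenet_v2_edgetpu.tflite",
    experimental_delegates=[tf.lite.experimental.load_delegate("libedgetpu.so.1")],
)
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Dummy image matching whatever shape/dtype the model expects (e.g. 1x224x224x3 uint8).
image = np.zeros(input_details[0]["shape"], dtype=input_details[0]["dtype"])

interpreter.set_tensor(input_details[0]["index"], image)
interpreter.invoke()                                    # inference runs on the accelerator
scores = interpreter.get_tensor(output_details[0]["index"])
print("top class index:", int(np.argmax(scores)))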
This is the beginning... Linus Tech Tips
https://youtu.be/B635wcdr6-w
Google Coral Edge in a consumer PCI card, with an application use case [Frankx, Oct 31 2021]
Mmmm I imagine the new TPUaster which watches you move about the kitchen and analyses your fridge contents and shopping habits, so that it can pre-heat the elements just in time for you to put a slice of bread into the slot. |
|
|
//lack of whimsy// - yes, true. Perhaps I should have started with an apology for lack of whimsy. |
|
|
//big processor to be made small// Ah, thanks for pointing that out. Entirely improbable that over the next few generations, processors will get smaller. That almost never happens. |
|
|
//already working on it// I would expect they are. If they're not talking about it (and they're not), then it's probably a good enough idea that they don't want to share it with their competitors. |
|
|
Aye, but fair enough. Perhaps I wasn't crystal clear that the invention is a new type of IC tailored for compact/mobile personal devices and dedicated to the specific processing demands of machine learning. And probably the development of operating systems that support it. And all kinds of control, task management, integration... |
|
|
unless you thought you knew what's going on and found out there's something newish that's been happening for quite some time, and you are a halfbaker looking at old posts. In that case you may find this idea - even though not whimsical nor original, and possibly not even an idea in and of itself - interesting. |
|
|
I will be posting about my startup Overstand, doing artificial comprehension (WITHOUT self-learning neural nets) later this year. Stay tuned. |
|
|
Machine Learning, or AI as it's now called, typically needs two phases for a supervised classification task. The first
is the training phase where models are built from data. This requires a lot of processing power and a lot of time.
It's not unusual for a deep learning model to require days of processing for feature extraction and fitting. Once the
model is ready based on an estimate of how it will perform on new data, it can be saved. The other phase is the
test phase. Some new data is presented to the model and a prediction happens. Typically, the test phase does not
require anywhere near as much processing power and happens somewhere else. |
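To make those two phases concrete, here is a minimal Python sketch - my own illustration, using scikit-learn and a toy digits dataset rather than anything from the annotation above: fit once (the slow part), save the model, then reload it for a near-instant prediction.

from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
import joblib

# --- Training phase: computationally heavy, done once, offline ---
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = RandomForestClassifier(n_estimators=200)
model.fit(X_train, y_train)                      # the expensive step
print("held-out accuracy:", model.score(X_test, y_test))

joblib.dump(model, "digits_model.joblib")        # save the fitted model for later

# --- Test / prediction phase: cheap, can happen somewhere else entirely ---
saved = joblib.load("digits_model.joblib")
print("prediction:", saved.predict(X_test[:1]))  # one new sample, answered almost instantly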
|
|
I would imagine that a phone is typically used with a previously saved model to make a prediction from some new
data so there would not be a pressing need to actually build new models in the phone. |
|
|
Cloud infrastructure is what is used to build models. Like love-hotel rooms, you rent this by the hour. My own
personal best is 576 CPUs and I used these for less than an hour. I didn't look at the bill. |
|
|
What [DenholmRicshaw] said - the computationally intense part of a traditional ML workflow is to efficiently encode a series of rules and choices into a function that's commonly expressed as a relatively high-dimensional vector that maps informational inputs (say image pixels) onto a vector of semantic outputs (cat, dog, sausage, tank etc). |
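A toy version of that "function expressed as a vector", just to show the shape of the thing (the sizes, labels and random weights below are my own placeholders, not a real trained model):

import numpy as np

labels = ["cat", "dog", "sausage", "tank"]

n_pixels = 32 * 32 * 3                      # a small RGB image, flattened
W = np.random.randn(len(labels), n_pixels)  # stands in for the learned weights
b = np.random.randn(len(labels))

def classify(pixels: np.ndarray) -> str:
    """Map a flattened pixel vector onto one of the semantic outputs."""
    scores = W @ pixels + b                 # the whole learned 'function' is one matmul
    probs = np.exp(scores - scores.max())
    probs /= probs.sum()                    # softmax over the four classes
    return labels[int(np.argmax(probs))]

print(classify(np.random.rand(n_pixels)))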
|
|
The learning process that tunes that function takes a lot of effort, but once complete, the resultant bit of wiring is fairly light in terms of processing - so it's not clear (given the current state of the technology) how embedding a TPU at the "sensory" end of the process would help much. |
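Some back-of-envelope arithmetic on why the finished wiring is so light (all numbers below are assumptions, chosen only to show the rough ratio): training revisits every example many times and computes gradients as well, while inference is a single forward pass.

params = 5_000_000          # weights in a smallish vision model (assumed)
examples = 1_000_000        # training images (assumed)
epochs = 50                 # passes over the data (assumed)

train_ops = epochs * examples * params * 3   # ~forward + backward + update per weight
infer_ops = params * 2                       # ~one multiply-add per weight at inference

print(f"training  ~{train_ops:.1e} ops")
print(f"inference ~{infer_ops:.1e} ops")
print(f"ratio     ~{train_ops / infer_ops:.1e}x")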
|
|
Also props for your CPU high score [DR] - I'm yet to top 32. |
|
|