This kind of font would consist of a program or script used to generate each letter. Rather than every "A" looking the same, the letter would be slightly different in each iteration, with each iteration's letters designed to look good together. For example, a program could describe a slightly curvy "A" with more of a curve on top, and with a background that looks like oil paint. The font interpreter would then generate a new iteration of that with each letter typed. Programs making use of the font would pass the interpreter the unique IDs of the letters immediately around the one just typed, and the interpreter would then generate a letter whose oil-paint-like strokes line up with those around it. Each letter would be largely the same and easily recognizable as the letter, but have more or less of a curve in different parts, very slightly different line widths, and a different color or image gradient in the background if so desired.
The advanced version allows arbitrary degrees and types of input, so that an interpreter running on a compatible machine could use brighter colors when the room is bright, automatically contrast against a non-font background, or even animate based on the user's probable mood.
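As a sketch of the kind of environmental input the advanced version describes, a hypothetical hook could nudge a glyph's colour towards white as an ambient-light reading rises; the sensor value itself is assumed to come from elsewhere:

    def adjust_for_ambient_light(base_rgb, ambient_lux, max_lux=1000.0):
        # Blend the glyph's base colour towards white in proportion to room brightness.
        factor = min(ambient_lux / max_lux, 1.0)
        return tuple(int(c + (255 - c) * 0.3 * factor) for c in base_rgb)

    # A dim room leaves the colour almost untouched; a bright room lifts it noticeably.
    print(adjust_for_ambient_light((40, 60, 120), ambient_lux=50))
    print(adjust_for_ambient_light((40, 60, 120), ambient_lux=900))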
TeX
https://en.wikipedia.org/wiki/TeX [pertinax, Apr 28 2021]
lots of fonts
https://www.fontspace.com/category/unique [xandram, Apr 28 2021]
Explorations in machine learning and latent space type design
https://www.100arch...t-space-type-design Aha, thought it was too obvious an idea not to have already been done. There's a video showing a traversal over the latent space of an S which is kind of interesting. [zen_tom, Apr 28 2021]
Another StyleGAN walk over fontspace
https://twitter.com...74922050281473?s=20 This one looks quite neat: each frame returns the fontspace at a particular point in some n-dimensional space, and the smooth transitions demonstrate a small step from one position in that space to another one relatively nearby. The result: an almost infinite multitude of fonts, each retrievable by an n-dimensional vector. [zen_tom, Apr 28 2021]
Beowolf Font
https://letterror.com/fonts/beowolf.html [xenzag, Apr 28 2021]
erikbern.com: analyzing-50k-fonts-using-deep-neural-networks
https://erikbern.co...eural-networks.html Source material for the Twitter link. [zen_tom, Apr 28 2021]
Frieze article on Beowolf
https://www.frieze.com/article/letterror [xenzag, Apr 28 2021]
Up next, combine the program with deep fakes: movies that can change voices and expressions to suit the viewer's desires.
Sounds like a job for TeX (see link).
Maybe I have misunderstood this idea, but I thought this was well baked. See link.
Do you mean that they are never identical letters?
The latent space of a GAN trained on a bunch of different fonts might be a good source of content (as well as being an interesting asset, potentially providing a single encoding model for all fonts ever). Assuming such a thing existed, you ought to be able to create a fairly rich space to traverse, either randomly, or along some path that started at whatever coordinates encoded "Comic Sans" and smoothly walked its way towards the 1750s "Baskerville" font used by Isaac Newton in his Principia Mathematica, perhaps taking a detour via Wingdings along the way to shake things up a bit. Any smooth path could be chosen, with the effect being a smooth transition from one font style to another.
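A rough sketch of the traversal described above, assuming such a trained generator existed; decode_font is only a stand-in for the real model, and the latent coordinates are made up:

    import numpy as np

    def interpolate_latents(z_start, z_end, steps):
        # Walk in a straight line through latent space from one font encoding to another.
        return [z_start + t * (z_end - z_start) for t in np.linspace(0.0, 1.0, steps)]

    def decode_font(z):
        # Stand-in for a trained generator that would turn a latent vector into glyphs;
        # here it just reports the point in fontspace it was asked to render.
        return "font at " + str(np.round(z, 2))

    # Hypothetical coordinates for two fonts in a 4-dimensional latent space.
    z_comic_sans = np.array([0.9, -0.2, 0.4, 0.1])
    z_baskerville = np.array([-0.5, 0.7, -0.1, 0.3])

    for z in interpolate_latents(z_comic_sans, z_baskerville, steps=5):
        print(decode_font(z))

A straight line is the simplest path; spherical interpolation is often preferred when walking GAN latent spaces, but either way the effect is the smooth font-to-font transition described above.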
It would be beneficial to test it on people to make sure it did not have any effect on reading speed or reading comprehension.
Why not just use Comic Sans for everything?
How far away (graphically? topologically?) from an "A" do you have to get before a person won't recognise it as an "A"? I guess that's what OCR does anyway, but I would think people are better at it (context & other assumptions...).