So, let's say you are reading a paper and have ideas and annotations during the process of reading. You click (or point) at the location where you want to add an annotation, and the system takes the context of that location on the paper. For text, the reader extracts a large enough context of surrounding words or sentences to uniquely identify the location, which allows the same annotation to be displayed around the same text later in other formats, be it HTML on the web or anything else. If the location is a picture, then the picture's features are extracted along with the pixel location, allowing the same annotation to be displayed on top of the same image in other formats. Essentially, we would have context IDs and coordinates, with each context associated with a feature set: a 1:1 correspondence between context IDs and feature sets, and a 1:many correspondence between context IDs and annotations.

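A minimal sketch of how such context anchoring could look in Python (the names, the hashing choice, and the snippet length are my own illustrative assumptions, not a spec):

    import hashlib
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class TextContext:
        """Enough surrounding words to uniquely identify a location in the text."""
        surrounding_words: str
        offset_in_context: int  # where the annotated spot sits inside the snippet

        @property
        def context_id(self) -> str:
            # 1:1 by construction: the ID is a hash of the feature set itself,
            # so the same context yields the same ID in PDF, HTML, or any format.
            return hashlib.sha256(self.surrounding_words.encode("utf-8")).hexdigest()

    ctx = TextContext("...uniquely identifies the location and allows later...", offset_in_context=23)
    print(ctx.context_id)  # stable across renderings of the same text
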
Then whoever reads the paper, in whatever reader, could load the public annotations and browse their history. It would be nice to have a conversation per annotation: each annotation creates the possibility of a thread of comments, and inside the comments you could refer to other annotations.

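As a rough sketch of the 1:many side (again with made-up names), each context ID can hold several annotations, each annotation its own comment thread, and comments can point at other annotations:

    from dataclasses import dataclass, field

    @dataclass
    class Comment:
        author: str
        body: str
        refers_to: list = field(default_factory=list)  # context IDs of other annotations

    @dataclass
    class Annotation:
        context_id: str  # many annotations may share one context ID
        author: str
        body: str
        thread: list = field(default_factory=list)

    note = Annotation(context_id="some-context-id", author="mindey", body="Compare with Memex?")
    note.thread.append(Comment("pashute", "See also the figure annotation.", refers_to=["other-context-id"]))
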
Moreover, each paper would have its paper ID generated from features extracted from the paper's text, especially the title and summary; if a DOI exists, just use that. It seems good to make such a system as widely usable as possible, not just for scientific papers but for any PDFs in general.

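A sketch of that paper-ID rule (DOI if present, otherwise a hash of the normalized title and summary; the normalization step is my own assumption):

    import hashlib, re

    def paper_id(title, summary, doi=None):
        if doi:  # a DOI is already a stable global identifier
            return "doi:" + doi.strip().lower()
        # Otherwise derive the ID from features of the text itself.
        text = re.sub(r"\W+", " ", (title + " " + summary).lower()).strip()
        return "sha256:" + hashlib.sha256(text.encode("utf-8")).hexdigest()

    print(paper_id("As We May Think", "Consider a future device for individual use..."))
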
Hopefully, this would make reading papers not a lonely activity at all, and the cross-pollination of ideas would lead to many new developments.

[Cross-posted from the Infinity Project.]
The Infinity Project
http://infty.xyz/g/68/en
Problem: "Academic research papers are hard to discuss on-line." [Mindey, Aug 29 2016]

Memex
https://en.wikipedia.org/wiki/Memex
You're almost describing some aspects of the Memex. [hippo, Aug 30 2016]

> allowing the same annotation to be displayed on top of the same image in other formats

I assume you mean in other formats of your annotation, and not in other formats of the image. If so, what formats did you have in mind?

And thanks for introducing me to infty.

[pashute], yes, exactly. I mean, in other formats of the document. However, since the image quality and the way of embedding might differ between those formats (e.g., someone might embed an image as an .eps in a LaTeX document while leaving it as a JPEG in an HTML document), we would probably have to extract the picture features from whatever format the image is in, and compare the features rather than the raw images.

// extract the picture features from whatever format the image is in, and compare the features rather than the raw images //

You want a perceptual hash.
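For instance, an average hash, one of the simplest perceptual hashes; a sketch assuming the Pillow library, with made-up file names and an illustrative threshold:

    from PIL import Image

    def average_hash(path, hash_size=8):
        # Shrink to 8x8 grayscale so only coarse structure survives
        # re-encoding (EPS -> PDF, JPEG -> HTML, rescaling, etc.).
        img = Image.open(path).convert("L").resize((hash_size, hash_size))
        pixels = list(img.getdata())
        avg = sum(pixels) / len(pixels)
        # One bit per pixel: brighter than the average or not.
        return int("".join("1" if p > avg else "0" for p in pixels), 2)

    def hamming(a, b):
        return bin(a ^ b).count("1")

    # Two renderings of the same figure should differ by only a few bits:
    # hamming(average_hash("fig1_from_latex.png"), average_hash("fig1_from_html.jpg")) < 10
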
One would think that this would also work well on things like YouTube videos, and in fact it would work well once the mechanism is set up.

Take a look sometime at the annotations the public produces and appends to YouTube videos.

The problems with YouTube's comments are these:

1. YouTube ranks videos, it seems, not by the difference between likes and dislikes, or by the percentage of likes versus dislikes, but by the TOTAL of likes and dislikes, based on the idea that controversial videos are popular, or something. Then they decided to apply the same ranking method to comments, leading to terrible comments being ranked at the top (see the toy illustration after this list).

2. YouTube hides all replies to comments except the two most recent until you click to see more, creating the appearance that the only replies, even to intelligent comments, are things like "you must support Hitler then".
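A toy illustration of how ranking by total engagement differs from the other two schemes (all numbers made up):

    def by_total(likes, dislikes):       # what YouTube appears to use
        return likes + dislikes

    def by_difference(likes, dislikes):
        return likes - dislikes

    def by_percentage(likes, dislikes):
        return likes / (likes + dislikes)

    good, awful = (950, 50), (600, 900)
    # by_total:      1000 vs 1500 -> the awful comment wins
    # by_difference:  900 vs -300 -> the good comment wins
    # by_percentage: 0.95 vs 0.40 -> the good comment wins
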
So you support Hitler, then?