Inexact change
We are having elections soon in my country. While being caught on a hook and pulled in by a clickbait site about the ten worst dogs you don't want to meet, I was served an advertisement with one candidate's picture and name, which got me thinking.
One could tackle one's adversaries by checking which context would really spoil their message, and what inappropriate environment could ruin their image. Then pay the broadcasters to play the advertisement while people are watching the wrong team, or as the leading advertisement for "Watch: Ten stupidest things people do..."
Facebook "Audience" API
https://developers....s/custom-audiences/
The old (pre-nonsense) Facebook API allowed you to cast your own nets by submitting a query along the lines of {Find users who like dogs who have at least three friends who are unemployed and have a sibling who attended college and is friends with a Greenpeace activist}. Now you secretly/securely send them your customer list (explicitly maintained by you for legal arse-covering reasons), which they will consume, match and augment with other like-minded individuals - importantly, without explaining how or why, since that would be illegal. This makes hacking the social-network side of the algorithm difficult to do, since they've long since hidden it away behind multiple layers of plausible deniability. [zen_tom, Jan 26 2021]
Facebook "Special Ad Categories" Blurb
https://www.faceboo...helpref=faq_content
Basically listing out what you're no longer legally allowed to do - much of which may well have been illegal in the past, had regulators of the time been capable of policing what was a series of high-profile gamings of the open-ended system. You don't publish this level of control to your API unless legal have explicitly forced your hand. [zen_tom, Jan 26 2021]
GDPR: Personal Data Definition
https://gdpr.eu/eu-gdpr-personal-data/
"Personal data means any information relating to an identified or identifiable natural person (data subject); an identifiable natural person is one who can be identified, directly or indirectly, in particular by reference to an identifier such as a name, an identification number, location data, an online identifier or to one or more factors specific to the physical, physiological, genetic, mental, economic, cultural or social identity of that natural person." [zen_tom, Jan 27 2021]
|
|
I may be out of date on this, but I think the mechanism in play is something like this: |
|
|
1. Party A writes clickbait. In order to monetize their clickbait, they need advertising revenue, so they turn to Party B. |
|
|
2. Party B sells exposure to an advertiser, Party C, and uses some kind of machine-learning algorithm to decide which advertisements are displayed to which suckers on which clickbait sites. |
|
|
3. Party C is the politician you don't like. |
|
|
So, to implement this idea, you'd have to hack the algorithm used by Party B without the consent of any of the three parties. |
|
|
[zen_tom], what do you think would be the most efficient way to do this? |
|
|
Which Party is the Party of the First Part, though? |
|
|
Missed the first part of the party, [8th]? That's about par. |
|
|
We were distracted by looking for the Sanity Clause... |
|
|
That's going to be a whole nother 11 months. |
|
|
What's the problem with paying Party B to adjust their algorithms into "joke mode"? |
|
|
All algorithms have "joke mode" built in, right? |
|
|
I mean why would any mathematician or coder write an algorithm and forget to include "joke mode"? |
|
|
The other easy possibility is to just buy advertising slots yourself, supply campaign material related to your opponents, and specify that they are run next to clickbait photo-stories about contagious diseases, dog vomit, insects and fires. |
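For illustration - and purely as a sketch, since no real ad network exposes exactly this interface, and every name and keyword here is invented - the "buy the slots yourself" approach might look something like this:

```python
# Hypothetical sketch only: invented campaign spec and matching logic,
# not any real ad network's API.

opponent_campaign = {
    "creative": "I VOTED FOR ...",              # the lame ad you supply yourself
    "placement_keywords": [                     # contexts chosen to spoil the image
        "contagious diseases", "dog vomit", "insects", "fires",
    ],
    "exclude_keywords": ["charity", "heroism"], # keep it away from anything flattering
}

def is_bad_fit(page_tags, campaign):
    """True if a clickbait page is a (deliberately awful) placement for the ad."""
    tags = set(page_tags)
    wanted = any(k in tags for k in campaign["placement_keywords"])
    banned = any(k in tags for k in campaign["exclude_keywords"])
    return wanted and not banned

print(is_bad_fit({"dog vomit", "listicle"}, opponent_campaign))  # True
```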
|
|
I'd be surprised if this does not already happen. The world of targeted online advertising is pretty opaque, the Generic Algorithms that run the demographic targeting are pretty specific, and the whole scene is fairly shady and hidden. It would not be hard to buy social media targeted ads which place Trump advertisements next to fart stories. |
|
|
Hi [pertinax], I have absolutely no idea, but I don't see why that should get in the way of having a rambly and ill-focused go. |
|
|
Party B (which let's assume is the internet in general, but most explicitly Facebook) will collect information on aggregate behaviours, and pre-nonsense would have sold that raw information on to 3rd parties (C's campaign people) to do with as they liked. C's team would have done their own machine learning to find ways of segregating that population into groups that would be most receptive to a particular narrative, and tailored their ads to be shown only to each sub-group in relative secret - quite likely issuing false-flag advertising to the opposite effect. This allows tailored lies to be expressed which wouldn't be exposed to the public scrutiny that would normally reveal their contradictions. The initial effect is to have some (probably only marginal) sway on opinion but, more importantly and more widely, it serves to erode people's ability to communicate using a common set of shared truths. i.e. it serves to remove the concept of objective truth. |
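To make that segmentation step concrete - a toy sketch, with made-up data and a hand-rolled rule standing in for the machine learning described above:

```python
from collections import defaultdict

# Illustrative sketch, not any real pipeline: Party C's team splits the
# population into sub-groups, each of which can then be shown a different
# (possibly contradictory) tailored message in relative secret.

users = [
    {"id": 1, "likes": {"dogs", "fishing"}, "employed": False},
    {"id": 2, "likes": {"dogs", "greenpeace"}, "employed": True},
    {"id": 3, "likes": {"fishing"}, "employed": True},
]

def segment(user):
    """Crude stand-in for the machine-learned segmentation."""
    if "greenpeace" in user["likes"]:
        return "green_message"
    if not user["employed"]:
        return "jobs_message"
    return "generic_message"

audiences = defaultdict(list)
for u in users:
    audiences[segment(u)].append(u["id"])

print(dict(audiences))
# {'jobs_message': [1], 'green_message': [2], 'generic_message': [3]}
```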
|
|
If finding confirmation of your personal biases is just a click away, then everyone can feel validated and safe in the comfort of their unchallenged point of view, whilst simultaneously feeling anxiety at how easily led and lied to everyone else has been. I don't think that was an intended consequence on the part of Party B and the internet in general, but it's turning out that way. Once you erode objective truth, the door is open to building your own media presence that generates content which supports and maintains a particular narrative (or promotes narratives on behalf of 3rd parties willing to pony up the monies). This model is quite traditional, but certainly you don't have to look far before coming across openly partisan media organisations such as Murdoch's Fox. |
|
|
After all the social media nonsense, regulation such as GDPR or CPRA came into force, such that the underlying data are no longer allowed to be shared, on pain of *comfy chairs* levels of inquisition. And separately, political advertising these days needs to be transparently funded - though with the existence of offshore banking, "transparency" is very much notional in practice. However, if large partisan media organisations exist that, due to their mix of entertainment and content, don't get classified as "political", then they can push whatever stories they like, and continue to farm their own audiences without having to disclose any of their funding or conform to any political advertising controls. |
|
|
So I'm not sure hacking algorithm B is the way to go anymore - the industry has* shied away from sharing these details, but also from using them for anything useful (at least in GDPR/CPRA jurisdictions). It's more about constructing world views and tending your mass-media congregation. Most of those algorithms are just self-selecting - so if you liked w or x or y, then you're going to get fed more of the same stuff by the same content providers. If you already created w and x and y, then you don't have to do anything clever to produce z and expect it to be delivered into the timelines of your existing audience. If that's a hack, it's a crude one. |
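The self-selecting loop really is about that crude - a minimal sketch, with w/x/y standing in for whatever you've already engaged with:

```python
# Toy model of the self-selecting feed: new items are scored purely by
# overlap with what you already liked, so z from the same provider arrives
# at the top of the feed with no clever hacking required.

liked_topics = {"w", "x", "y"}

candidates = [
    {"title": "z", "topics": {"w", "y"}},   # same provider, same themes
    {"title": "other", "topics": {"sport"}},
]

def score(item):
    return len(item["topics"] & liked_topics)

feed = sorted(candidates, key=score, reverse=True)
print([item["title"] for item in feed])  # ['z', 'other']
```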
|
|
If you wanted to subvert that on neutral platforms, then I think you'd have to brute-force it by paying for un-sanctioned associations - marrying up the ads with inappropriate groups of people. To do that you might need some kind of Tufton Street style "think tank" operation through which you could false-flag your work and obfuscate your funding. Things like the creatively named "Tax Payer's Alliance", "Institute for Economic Affairs" and "Global Warming Policy Foundation" already exist to perform these kinds of Trojan Horse functions, where extreme partisan viewpoints are injected into normal life by seemingly neutral and objective organisations. |
|
|
* pure conjecture - zero evidence on this - though certainly in Finance, usage of behavioural data for purposes not strictly agreed to by the individual is dangerously toxic and cause for huge anxiety and waving of arms. There are strictly limited uses, like for insurance or credit - but the anxiety is such that any innovative use of personal information leaves a company open to existentially threatening legal action. If you're a sham front organisation for a particular international political lobby, that might not be such a strong motivator. |
|
|
//I don't see why that should get in the way of having a rambly and ill-focused go.// |
|
|
Thank you, [zen], for that is how we roll. |
|
|
//without explaining how or why since that would be illegal// |
|
|
Which law would be broken by an explanation of how such a data set was augmented? |
|
|
Hi [pert]. GDPR defines personal data as content relating to an identifiable person - importantly, it covers identification both directly (data associated with a name and address, for example) but also, specifically, *indirectly*. Which sets up some thorny issues. Say there's a town with a population of 1200. In that town, there might be 100 home owners with blue front doors, and of those, 20 have children. Of these, 5 might belong to a particular athletics club, and of those, 1 is blonde. With this loose collection of facts (town, parental home-ownership, photos of people and houses, facebook club page membership, hair colour, etc.), it's possible to identify an individual, and so public access to those facts in combination needs to take that into account, even if in isolation none of those facts is enough to identify someone. |
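You can make the arithmetic of that example explicit - a toy sketch using the counts above, with the individuals themselves obviously invented:

```python
# Toy model of the blue-front-door example: the attribute counts (100, 20, 5, 1)
# come from the comment above; the people are made up. No single fact
# identifies anyone, but their intersection singles out one person in 1200.

town = [{"id": i,
         "blue_door_homeowner": i < 100,  # 100 homeowners with blue front doors
         "has_children":        i < 20,   # 20 of those have children
         "athletics_club":      i < 5,    # 5 of those are in the athletics club
         "blonde":              i < 1}    # and 1 of those is blonde
        for i in range(1200)]

matches = [p["id"] for p in town
           if p["blue_door_homeowner"] and p["has_children"]
           and p["athletics_club"] and p["blonde"]]

print(len(matches))  # 1 -- so the *combination* of facts is personal data
```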
|
|
So in the Facebook case - they have to be careful what clues they reveal about an individual, even if that individual's name is obfuscated or hashed-out, since it's possible to construct an identifiable profile from those otherwise innocuous details - and so the supplier of those innocuous details becomes responsible for their protection, and liable where that protection isn't provided. This (I guess) is why they switched from being able to query these things individually over their API to submitting big lists of records and returning expanded audiences in aggregate, since (they would hope/argue) it's too difficult to determine the specifics of any one individual and how they were considered part of the "in" group. |
|
|
To further complicate matters, whether something is personal data (and therefore becomes subject to regulation) or not depends to some extent on the intention of the person using that data. |
|
|
As a piece of regulation it does pose a lot of problems for people who have to deal with it - but in my opinion at least, it does serve to address the most glaring concerns any individual might have about how their information might be used by third parties - rather than just the usual moan about the extra cookie-popups we have to navigate whenever we land on a clickbait site. In the background, there's a great deal of infrastructure and change that's serving to "do the right thing", and without these regs we would still be in wild-west land, where the biggest database could mean real and tangible impingement on civil liberties. |
|
|
My proposal was exactly that. You supply lame advertisements for the opponent, and have them shown at the wrong moment, by targeting your "fake" content to contextually bad search words. |
|
|
Just like you can get your Electric Bike store up in front of people who searched for electric bikes, you can likewise get your opponent's ad shown when people look up "10 most stupid things people do" - so it brings up "I VOTED FOR..." and your lame slogan with the pic of that candidate.
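As a sketch of that trigger mechanism (invented names, and a crude substring match standing in for whatever the search platform actually does):

```python
# Toy version of keyword-triggered ad selection: the opponent's lame creative
# is registered against search phrases you'd never want your own face near.

ad = {
    "creative_text": "I VOTED FOR ...",  # plus the candidate's picture
    "trigger_searches": [
        "10 most stupid things people do",
        "ten worst dogs you don't want to meet",
    ],
}

def ads_for_query(query, inventory):
    """Return every ad whose trigger phrase appears in the search query."""
    q = query.lower()
    return [a["creative_text"] for a in inventory
            if any(t in q for t in a["trigger_searches"])]

print(ads_for_query("Watch: 10 most stupid things people do", [ad]))
# ['I VOTED FOR ...']
```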
|