we work on free software, we are interested in critical views of technology..
we have been working with a group of 50 artists on the first floor..
the topic: Cqrrelations.
....waiting for the apparatus to be explained... 
...
....
:)
if you are wondering what happened over there...

I would like to invite Silvio, Dave?


Hello
Silvio and I have been working on this for the last couple of days.. as a speed project.
it is a response to a piece of software called Pattern

we have been looking at sentiment and modality assessment..
it could be a whole book
how objective the author might be

objective is already a philosophical minefield
so, the Pattern writing coach is linked up to a script
if you are annotating this talk on the etherpad
it will take the last line of text.. to be judged by the coaches..
what do we got?
a critic counter!
and then we have the diarist
share your feelings
not be too afraid
to show your true self
she is especially looking for emotions
we didn't draw ourselves
I think that is it
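
(a minimal sketch of what such a coach script might look like; sentiment() and modality() are real functions from the Pattern library, while the etherpad-polling side and the sample line are our assumptions)

    # sketch of a Pattern "coach", assuming Pattern 2.x on Python 2;
    # sentiment() and modality() come from Pattern, the rest is hypothetical
    from pattern.en import sentiment, parse, Sentence, modality

    def judge(line):
        # sentiment() returns (polarity, subjectivity):
        # polarity in [-1, +1], subjectivity in [0, 1],
        # the "how objective is the author" score
        polarity, subjectivity = sentiment(line)
        # modality() scores certainty in [-1, +1] over a parsed sentence
        certainty = modality(Sentence(parse(line, lemmata=True)))
        return polarity, subjectivity, certainty

    print(judge("we didn't draw ourselves"))
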
you are observing us all night long?
everything we say will be algorithmicized?





these guys are artificial intelligence
watch what you say!
there's a warning
there are four academics on the stage
three of these tend to speak rather fast
because I was disgusted by law
the way techniques shift how we relate to the world
there are signs without signification, without meaning
there is nothing left to represent, there is just data
we forgot the specific issue of today, discrimination
they don't relate to anything that people can feel
categories are not like ethnic categories

we live in the epoch of the database

there is no national discourse, we are all niche-marketed
work through the political system
instead of taking data, 'what is given', let's use 'capta', that which we choose to take
when we work on the classification of diseases..
the capta that we have
is based on specific theories...
we are building up the medical system according to the data produced..
the particular problem here.. 
if you're not being captured by big data, you're even more invisible
it becomes even more discriminating.
eugenics never went away
it's still alive and kicking instead..
what if we can use big data to predict schizophrenia..
thats kind of problematic..
to 'stop it before it starts'.
we are using big data to performatively produce a particular normative society
to some extent we can challenge big data in interesting ways.
that's why Cqrrelations.
hardcore analytical philosopher.
getting up close to the techniques
spent a lot of my time trying to learn
in the community of data miners in new york
much of my work has been to bring down to earth some of the assumptions
and omissions involved in these kinds of practices
consequences often look like discrimination
most of these techniques are, by design, meant to discriminate
I spent some time recently on how these kinds of assumptions, which often look like cases of discrimination, are discrimination
largely data driven decisions
prejudice, bias in decision making.
there are worries about these techniques, but also enthusiasm.
employment, for example, looks exclusively at those methods.
either consciously or unconsciously
there is serious potential for these tools to address these problems
even seemingly objective ways can nevertheless be vulnerable to other problems
i'll mention a few
two ways to be discriminatory
to say one is subject to discrimination when one shouldn't be. a type 1 error
an inaccurate inference
a second set of issues deals with the process being too precise
able to carve through the population
but it might simply be a way to reproduce the problems in society
for those already at a disadvantage
accuracy of these models.
we need to be very suspicious of what it means
for something to be accurate
people who apply these tools, what do they mean by accurate?
the labeling of the examples is treated as objective.
a matter of fact.
to rely on labels as matters of fact

subjective decisions
on what is a good employee
what is a bad employee
while building models, we rely on these annotations, and in order to test the model, we test on examples we already know to be positive.
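
(a minimal sketch of that point, in Python with scikit-learn, which the speaker does not name: the "accuracy" computed below only measures agreement with the annotators' own labels; all data is invented)

    # "accuracy" here means agreement with the annotators' subjective
    # "good employee" / "bad employee" labels, not with any matter of fact
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 5))          # hypothetical employee features
    y = rng.integers(0, 2, size=200)       # hypothetical annotator labels

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

    # the test labels come from the same subjective annotation process,
    # so this score cannot detect bias baked into the labels themselves
    print(accuracy_score(y_te, model.predict(X_te)))
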
you do not mention datamining or machine learning
datamining is something that you mind
the words we choose to describe these problems determine how we engage with them
you brought other elements!
can you say something about selection of these words?
who wants to go first?
the idea behind the words I use is that they denote a translation, or transformation, in the use of statistics
it is a LITTLE LITTLE bit technical
Robert Musil is the famous author of The Man Without Qualities
you can read his fascination with statistics
if theories of probability exist
traditional statistics were selective
everything that was far from the average was considered noise
in big data there is an illusion of exhaustivity
traditional statistics were valid only for large numbers
with big data we can take into account what is furthest removed from the average
the idea of the average, or 'the normal man', the most common, is completely avoided in big data
it seems anormative, without norm, beyond critique of course
they are always the results of conventions
in the old days of statistics, it was mainly women who had the task of assigning the categories
where do we put that woman who escapes the category? 
they discussed that amongst themselves!
they could be taken as targets for public contestation
contestability was inherent in classical statistics
now we escape this burden
it appears anormative at the same time
the average man was the ideal man!
he had most qualities of humankind
for some it was the complete opposite
it was felt as oppressive
this shift to automatic datamining is emancipatory
the categories do not preexist the collection of data
they emerge from the data
this refers to the dreams of the sixties
peace & love!
Felix Guattari, Gilbert Simondon
idea of dissolution of objects and subjects, replaced by processes of individuation
it looks emancipatory
I have to convince myself that it is the opposite of emancipation
algorithmic government does not target the present, the individuals
their objective is to neutralize the virtual, the potential, the things that are not yet there
this is at work in marketing, in surveillance
life is recalcitrant to any attempt of excessive organisation anyway
that's a bit abstract
shall I tell you why I think it is not emancipatory?
no
do I agree or disagree with what you say?
every age says 'we're different'
a good worker
the definition of good worker feeds back in
the moral categories we are using..
..deviant etc remain constant during the last hundred years..
the transformation in statistics has been key
how we analyze the data, the huge amounts..
this wonderful concept
that bruno latour has..
the oligopticon
we are moving away from the Foucauldian notion of the panopticon
that what the state wants is to see everything
all the time, on all citizens
somehow we are not moving in that big brother world..
oligopticon is something else
to take a small amount of information
that is enough to control you
there is a strong notion of what constitutes normalities
the accepted behaviours
going out of which, you get punished.
may I?
yes you may
of course it does not replace other forms of governmentality
it's the juxtaposition
of course we have standards of normality
how do these standards evolve?
they evolve in a viral manner
deviating from profiles
the algorithm will not detect the deviation as failure
just another occasion to improve the pattern
improve your profile
there is a lot of performativity
à rebours! (in reverse)
never mind what you're trying to do. there will be nothing like an event, breaking out of the profile
not conceivable in the digital universe
a mere occasion to render the system more efficient
the more you disobey, the more you participate
it does not go through the visual episteme at all.
we have been rendered invisible to what counts for us
Benjamin has the description of the bourgeois interior of the 19th century
souvenirs, cushions, etc
the consolation for the growing anonymity in the public space
the assurance of still existing
with small objects in the private sphere
replaced by blogs facebook etc
a subjective interpretation, of course!
not by exhibitionism etc.. it's just they need the proof of having an interiority.
temporary aggregates of data, manageable at an industrial level
thats the very paradoxical situation
a fantastic book
about how las vegas uses algorithms
to evacuate the interiority
they turn you into a collection of data points
what they want is to put you in a zone
where precisely you have no subjectivity
if you take gambling as the image of our time.
most of the people in this room are studying.. ?
big data
big data is marketing speak
general purpose artificial intelligence
the original work was based on logical expressions
that would culminate in "consciousness"
hardcoded lines to describe reasoning
with enough data, better algorithms!
quickly it became evident it was a failed enterprise
partly because the programs wouldn't work
people spread elsewhere
teaching machines by example
that is, rather than deductive reasoning.. just show enough examples
after showing enough examples, it learns the features of a cat
a process of induction
what is important about this is that there is very traditional categorization
a good worker
a bad worker
defines a set of examples
the features of the good worker can only be found by categorizing all the examples
it is the kind of category that depends on
human subjective categorization
but its producing something alien
the category is indebted to the human
but is something alien
reasoning distinct from human
generally speaking, the whole attraction is to depart from how humans think
new kinds of things that aren't categories
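
(a sketch of that induction, again with scikit-learn as an assumed stand-in: a decision tree is given "good worker" / "bad worker" labels, invented here, and produces rules indebted to the human labels but alien in form)

    # induction sketch: the category emerges from labeled examples, and
    # the learned rules are threshold cascades, not a human definition
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier, export_text

    rng = np.random.default_rng(2)
    X = rng.normal(size=(100, 3))        # hypothetical worker features
    y = rng.integers(0, 2, size=100)     # hypothetical good/bad labels

    tree = DecisionTreeClassifier(max_depth=2).fit(X, y)
    print(export_text(tree, feature_names=["f0", "f1", "f2"]))
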
can you give an example?
the employment example I learned recently.
TERRORIST
you can either pass over the case, or consider an innocent as such
what is even an example of a terrorist?
people who want to use these techniques have to justify the categories in the first place..
rather than hardcoding the definition of terrorist
you set some examples as ground truth..

when the CIA decided falafel eaters=terrorists...
these techniques that learn from examples.. mean that we can only learn things for which we have good examples
for the majority it might work well
but if you are part of a minority group..
if you are in the segment of the population out of the norm
you won't be assessed.
the accuracy of these techniques is proportional to the size of the population they are dealing with
employment categories might be terrible if you have credentials from South America..
just because they can't be assessed.
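
(a sketch of the sample-size point with synthetic data: a model trained where one group is a small minority with a different distribution fits that group much worse; scikit-learn assumed again)

    # synthetic illustration: accuracy tracks the population the model sees
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(1)

    def make_group(n, shift):
        X = rng.normal(loc=shift, size=(n, 3))
        y = (X.sum(axis=1) > shift * 3).astype(int)
        return X, y

    X_maj, y_maj = make_group(1000, 0.0)   # majority group
    X_min, y_min = make_group(30, 2.0)     # small minority, shifted distribution

    model = LogisticRegression(max_iter=1000).fit(
        np.vstack([X_maj, X_min]), np.concatenate([y_maj, y_min]))

    print("majority accuracy:", model.score(X_maj, y_maj))
    print("minority accuracy:", model.score(X_min, y_min))
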
problems perceived as necessitating strong intervention from the state
are now being refigured as things to be solved with more data
data is contributing to the retreat of the state
and the dismantling of welfare
example?
in the UK. the way to deal with global warming..
nudging http://lib.estrorecollege.org/view.php?id=425109
you use more energy than your neighbour
that should encourage you to use less energy
instead it could be influencing decision making..
Evgeny Morozov, To Save Everything, Click Here
extending the critique to this kind of solutionism.
the results are pretty lousy. how much of this is simply symbolic?
how much is just showing off?
they now create radio stations that automatically define what music you want to listen to without music categories
there was a huge push for big data, whether or not it would work.
it won't go away, no matter how successful it is.
good old-fashioned AI is still getting money
what's the impulse that gets us to go for these techniques?
even though they aren't working? though some are..
whether it works or not is not the question
we have given away the exegesis of truth
its a truth regime in a foucauldian way
whether it works or not, the predictions will impact the reality
it creates the world of today, its behaviours
a form with a lot of holes
it doesnt contain you completely
but traces the future trajectory
in blurring ways
doesnt matter if it works..
Digital Market Manipulation, Ryan Calo
new techniques of personalized marketing
the goal of the practice is not to make you buy something directly
but to discover what time of the day you are the most susceptible to some triggers
resistance is something that gets tired
the more you use it the less you have it
by refusing offers for 6 hours, on the 7th you will click?
we are not even the authors of our own desires
impulses. alerts to react to.. reflex mode rather than reflection
in real time.
the capacity to recognize oneself in the motives of our actions..
i will buy the chocolate
i will eat it as quickly as possible.
am i autonomous?
i would have wished not to be motivated by such addiction.
transform my drives into desire.
very important capability.
lots of definitions around
is it just running algorithms and finding the result, or looking at a database?
personally I think that profiling and AI are a pharmakon.. horrible word?
sometimes I have to correct a lot of nonsense..
I must admit that by number 23 I might not be as objective in grading the students' writing
there is software that, after I correct the first 50, learns how I correct.. it checks the next 50 and I can correct those.. it reinforces what is going well.
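
(a sketch of the grading assistant described, assuming a simple text-regression setup in scikit-learn; the papers and grades below are placeholders)

    # learn from the first 50 corrected papers, propose grades for the next 50
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import Ridge

    graded = ["essay %d about the assigned topic" % i for i in range(50)]
    grades = [10.0 + (i % 10) for i in range(50)]        # teacher's grades
    ungraded = ["essay %d on the assigned topic" % i for i in range(50, 100)]

    vec = TfidfVectorizer()
    model = Ridge().fit(vec.fit_transform(graded), grades)

    # proposals for the teacher to correct, reinforcing what goes well
    proposals = model.predict(vec.transform(ungraded))
    print(proposals[:5])
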
i'm convinced that we will be using this.
somebody said that this will be horrible.
it's very important that we move beyond thinking that these machines are using us, or that we are using them.
have a conversation with them.
they are anticipating us.
they have no interior..
japanese animism.
people deal with things as if they are animated
we can learn to contest the machines by interacting with them
people are tired of making choices
what a fantastic point!
one of the skills we don't learn in colleges is critical reading of databases
it is a core role of the humanities in the future
if you want to contest a database, you need to understand what kind of queries work, what misses
I like the idea of the top 5 and bottom 5%
grading is always a subjective phenomenon
the problem is funding for education
we need to challenge the problem
it does work providing we've got the skills
what are the skills that allow us to question and challenge?
lots of the public say they can't understand
YES, YOU CAN
it is like before the printing press
we need to democratize the process
there is a distinction between supervised learning (with examples) and unsupervised learning (show me the examples you find)
there is a new approach: deep learning
it is a breakthrough
Google was able to train a machine to recognize cats
it was able to define the features without anyone telling it what they might be
this is applied in speech & image recognition
it is possible with huge datasets
it will be quickly adopted in commercial applications
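
(a minimal sketch of the supervised/unsupervised distinction on the same synthetic data: the supervised model is given labels, KMeans is told nothing and finds its own grouping; deep learning itself is too heavy to sketch here)

    # same data, two regimes: with examples (supervised), without (unsupervised)
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(3)
    X = np.vstack([rng.normal(-2, 1, size=(50, 2)),
                   rng.normal(+2, 1, size=(50, 2))])
    y = np.array([0] * 50 + [1] * 50)   # only the supervised model sees these

    supervised = LogisticRegression(max_iter=1000).fit(X, y)
    unsupervised = KMeans(n_clusters=2, n_init=10).fit(X)   # no labels given

    print(supervised.predict(X[:3]))    # learned from examples
    print(unsupervised.labels_[:3])     # clusters it found on its own
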
this idea of interacting with the model that you've trained
even if you cannot have a comprehension of it
allows for something else that is interesting
the temptation to bring some kind of objectivity
techniques can multiply the interpretations of something
bring new perspectives
reflects something you don't recognize
make you learn something about yourself that you didn't know
if personal information is involved, this is dealt with in the law
I would use a software for automatic recommendation
but you should keep for yourself the capacity to justify the grounds on which you have trained these systems
if they make us believe that they dispense us from decisions and errors
everything is forgiven in advance because it has been recommended by a machine
imagine a context of the prison, risk of recidivism
there is much more pressure on the personnel
you praise objectivity which I praise as well
there is elegance in algorithm
the marionette's theater, Kleist
why do you go and look at dancing marionettes, when you dance so well?
because the grace of an inanimate body is so much higher
because they're not affected by emotions
that is why they're naturally balanced
in order for humans to achieve this
they must be either completely ignorant or they must be gods
we have the impression of having all the data
and at the same time we don't know
will datamining be more often in a relationship with us where it dominates us?
or will it be used as a tool that enhances us?
what kind of relationship do we want?
we've been looking at the figure of the annotator, the person that scores the training data
in that difference between a conversational relationship and training relationship
there is the difference how you can relate to the tool
think about the context this software has been produced in
the political ideology of the persons who annotate, or do the grading
the conversation should allow you to stop
as a feature inside the software
thanks
I share your sentiment
what is dangerous about a discussion that focuses on the performativity of these tools: we focus on the foreground, not the background
predictions will prove to be true if the background conditions remain the same
we're failing to ask: why don't we focus on the background of these tools
that some people have more difficulty accessing good education
government by the fact
facts are governing us
facts being data
without explaining why these data are how they are
the fact has no value by itself
it presents the background as something natural we cannot change
the neoliberal society as the natural landscape
some people prefer to have algorithmic discrimination because it will allow them not to be screened out
the most normal people in this case
the tyranny of the majority
algorithmic screening will increase indirect discrimination against the most vulnerable people
that is convenient for the majority
what is lost here is that this pretended objectivity is not itself judged
and the possibility to contest the categorization
traditional mechanisms can still apply here
we can perform research that shows that a part of population is burdened more than others
it is important that analogue techniques can still do that
some seemingly neutral decision has impact on part of the population
people who can assess these decisions, their impact, can do a lot of good work
it will be difficult
because algorithms use different kind of data
some are structured, some are less structured
there are different logics implied
when machine learning is involved things become more opaque
it needs time to prove that it is discriminating
you can look at actual decisions and run experiments on that
social scientists are happy to treat decision making processes as black box
how does this all affect the way humans behave?
using categories, schemas etc, we still keep thinking in those terms..
we have been ? using these for thousands of years ?
even as children, continuously asking whys..
we try not to go into a system. but it will make mistakes
the moment of the decision
if a prisoner would be released or not
this gravity moment, that carries some autonomy, the decision has to be attached to knowledge
otherwise there would be no autonomy.. 
the iron cage of bureaucracy
prescriptive for human behaviour.
let's imagine a discrimination on the basis of an algorithm. who do I sue?
state/company/coder/etc..
who do I shout at? 
a very sexist slide: language use by female and male users of social networks
shopping/cute/cats vs fuck/football/sports
who do we shout at then?
the nature of knowledge..
one condition is very connected with ethnic categories..
but ethnicity is not a genetic category. so they deliberately produced other graphs
no mention of ethnicity.. interpreting the graphs reinserted them though
it's very hard to get out of these categories.
one of the problems is: 
    what does it mean to be human?
seeing myself as a unit might be a category error..
i'm being created by a system.. we should reconsider
the claim that data is the end of theory
i want somebody to shout at.
it's behaviourism gone wrong..
some similarities. 
first step: don't assume you are autonomous
then start to reconstruct the causalities etc..
//has to deal with the phenomenology of the machines..
i like this tumblr where face detection is applied and it finds non humans.
what is interesting is that you can then compare it with the times humans find faces as well, some of them completely absurd.
reveals something of the phenomenology of machines.
not treating classifications as errors or symptoms of something we dont have access to
another important study related about accountability/shoutability
I was told by a colleague that their name on Google suggested an arrest record
using distinctively black names, there was a difference: you were more likely to get ads suggesting arrest records
than with white names.
by multiplying the number of advertisements and the likelihood somebody would click on them..
so people who had been exposed to these ads were clicking.. 
so the prejudice of the users filters through as prejudice of the algorithm!
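
(a toy simulation of that filtering, with invented numbers: a greedy ad server shows whichever variant has the better observed click-through rate, so slightly prejudiced clicking is amplified into a strong algorithmic preference)

    # toy feedback loop: users' prejudice becomes the algorithm's preference
    import random
    random.seed(0)

    clicks = {"neutral": 1, "arrest": 1}    # smoothed click counts
    shows = {"neutral": 2, "arrest": 2}

    for _ in range(10000):
        # greedily pick the variant with the best observed CTR
        ad = max(shows, key=lambda a: clicks[a] / shows[a])
        shows[ad] += 1
        # hypothetical users: "arrest record?" ads get slightly more clicks
        # next to a black-identifying name
        if random.random() < (0.06 if ad == "arrest" else 0.05):
            clicks[ad] += 1

    print(shows)    # the "arrest" variant ends up shown far more often
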
who is responsible in this case? users/company/algorithm/robots?
a minimum discrepancy when advertisements deal with race..
unclear who is responsible
google corrected that "mistake"
about human stereotypes.. they live long, that's for sure.
but in fact there is a competition between two semiotics.
the semiotics through which people experience discrimination
and the machinic layer as well, with a-signifying semiotics
made of 1s and 0s
Writing Degree Zero
there is a gap
two modes of expression and experience
can we call a stereotype something made of 1s and 0s?
001010101110101010101010101010101
we want reality. not realism, reality as such. we don't want to touch reality through language. 
the refusal of representation as such, and all mediations. too late compared to real time facts.
we will keep stereotypes at the human semiotic level.. but the other level is something else.
about the role of knowledge in decision; uncertainty is what gives decision the value
politicians obviously forgot about this.
the gesture of decision always fails. the tragedy of failure is what we want to expel from human experience..
errors failure we all die. 01
i completely agree. algorithmic normativity appears as anormative
a consequence and naturalization of social normativities, which become unspeakable and invisible
because they have been translated into 1s and 0s
1010101010101001010101101
don't rely on unreliable sources