Learning Goal: I’m working on a communications case study and need an explanation and answer to help me learn.
Part 1 – Data
Come up with random data to analyze (do not mention it is random data; act like it is your data). The data should cover a Google Account: Calendar, Chrome, Contacts, Drive, Photos, and Gmail. Take a look through Your Activity, especially your Location History, Web & App Activity, and YouTube history. Also include Facebook data: posts, activities, events, interactions, groups, etc. Try to think like you are a young woman interested in typical young-woman things.
Part 2 – Analysis
Now that you have spent some time examining your digital shadow, consider your data and how it reflects you and who you are. Were you surprised about the types of information that have been captured about you? What do these data say about you and who you are (hint: specifically look at the advertising options that are being shown to you)? Does examining these data evoke any particular emotions?
Write a report reflecting on and analyzing your own data and connecting it to the concepts that we have discussed in class, especially those around the data-human assemblage. Readings are attached. The following questions may help you to focus your analysis:
• How do these data reflect you as a data-human assemblage?
• Is your data a companion species? Do data-capturing devices become companion species? If so, how or why?
• Is your personal data distinct from you as a person?
• How does interacting with your data align with conceptions of subjectivity and agency?
The report should be 1,250 to 1,500 words in length (not including references), or 5 to 6 double-spaced pages. Use APA style for your in-text citations and reference list. Be sure to include a header on each page with your last name and page number.
I tip well<3
Below are readings that can help. The first one is a class reading that you're going to use for references:
https://journals.sagepub.com/doi/10.1177/205395171…
In case it helps you with creating/analyzing the data, here are two articles my teacher just sent.
Here are two articles that may help you interpret your Google and Facebook data:
https://www.teenvogue.com/story/all-the-data-googl…
https://www.nytimes.com/2018/05/16/technology/personaltech/google-personal-data-facebook.html
This assignment is very opinion-based; it's pretty much you trying to examine your data and what it essentially shows about you.
I have a more detailed outline if you need it, so please let me know, and of course reach out if you have any questions as well.
Thank you!!
CHAPTER 3

Apps as Companions: How Quantified Self Apps Become Our Audience and Our Companions

Jill Walker Rettberg
Abstract: Self-tracking apps gather intimate information about our daily lives. Sometimes, they take the role of a confidante, an anthropomorphised companion we can trust. Humans have long confided in non-human companions, such as diaries. The relationship between user and app is structurally similar to the relationship narratologists and literary theorists have identified between diarist and diary. Our agency is always shared with the technologies we use, whether they are simply pen and paper or a complex AI. By comparing apps to diaries, I demonstrate how these technologies act not simply as objects but also as narrators and narratees. While diaries are mostly silent listeners, self-tracking apps speak back to us in a feedback loop and thus enter a role as our companions rather than simply as our audiences.

Keywords: Self-tracking · Quantified Self · Apps · Diary · AI · Narratology
© The Author(s) 2018. In B. Ajana (ed.), Self-Tracking: Empirical and Philosophical Investigations, Palgrave Macmillan, pp. 27–42. ISBN 9783319653792. DOI 10.1007/978-3-319-65379-2_3.
J.W. Rettberg, University of Bergen, Bergen, Norway. E-mail: Jill.Walker.Rettberg@uib.no
Introduction
Self-tracking requires technology. Not necessarily digital technology, but
always, technology. Tally marks pressed into clay or scratched into stone;
paper charts with pens for making check marks and perhaps calculations;
smartphone apps that track everything a smartphone can measure: all
these are ways in which humans have used technology to create an exter-
nal, quantified representation of an aspect of our lives.
As long as the technology we use is simple, like a pen and paper, we
tend not to think of the technology as adding much to the process. But
we could not possibly remember the events we record in anything like as
exact a manner without recording them, even if the only technology we
are using is paper. If we think about it, we also know that the organisa-
tion of the charts we draw affects what we measure and how we think
about it.
When we use simple technologies, though, we tend to still feel as
though we are using the paper. We are in no doubt as to who is the sub-
ject here: the human feels fully in charge, at least in cases of voluntary
self-tracking, where the person doing the tracking is free to stop at any
time or to change the chart she is using. The human is the subject with
agency to act upon objects, that is, upon the pen and paper and the data
that the human collects.
This chapter is an examination of self-tracking apps that emphasise
the agency of the app through a conversational interface, where the app
uses simple scripts or more complex artificial intelligence (AI) to speak
to the user. Until recently, self-tracking apps have displayed user data in
lists or graphs, but as conversational agents like Siri on the iPhone or
Amazon’s Alexa have become popular, self-tracking apps are also begin-
ning to use the technology. Examples range from text-based chatbots
like Lark, Instant and Pepper, which send encouraging messages and
ask simple questions of the user, to speaking workout assistants like Vi
(pronounced vee), which is what Andrea L. Guzman calls a Vocal Social
Agent (Guzman 2017).
Telling our secrets to a simulated confidante like Vi is structurally sim-
ilar to confiding in a diary. Diarists often anthropomorphise their diaries,
addressing them as ‘Dear Diary’ and confiding in them as though to a
human friend. In this chapter, I outline a history of humans confiding
in non-human companions, from diaries to apps, in order to show how
our agency is always shared with the technologies we use, whether they
are simply pen and paper or a complex AI. By comparing apps to diaries,
I show how these technologies, or media, act not simply as objects but
also as narratees or audiences to our human narratives. While diaries are
mostly silent listeners, self-tracking apps speak back to us and thus enter
a role as our companions rather than simply our audiences. We don’t see
this to the same extent in social media, where we share content intended
for a human audience, using technology as a medium between humans
rather than as a companion or a tool for organising our data. This also
occurs, to a lesser extent, in other digital media—but it is more obvious
in self-tracking apps because they are designed to work without necessar-
ily having any other human audience than the user themselves.
Trusting Our Apps
Digital devices are far less transparent to us than pens and paper or most
other pre-digital technology. Most of us don’t really understand how our
self-tracking apps work, and we’re not always entirely sure what they’re
measuring. Interestingly enough, this often means we trust them more
than we trust ourselves. José van Dijck calls this dataism: a ‘widespread
belief in the objective quantification and potential tracking of all kinds
of human behaviour and sociality through online media technologies’
(Dijck 2014). We may even trust our devices more than our own experi-
ences or memories. Studying people wearing heart rate variability moni-
tors, Minna Ruckenstein found that her informants changed their stories
about their day after being shown the data:
Significantly, data visualizations were interpreted by research participants as
more ‘factual’ or ‘credible’ insights into their daily lives than their subjec-
tive experiences. This intertwines with the deeply-rooted cultural notion
that ‘seeing’ makes knowledge reliable and trustworthy. (Ruckenstein
2014)
This surrendering of subjectivity or agency to our machines tends to
worry people. We trust the machine’s representation of our life more
than our own memories. Do we really want our machines to be writing
the stories of our lives?
Perhaps, though, we have never written the stories of our own lives.
At least not completely alone. We write with the tools we have at hand:
pen and paper, Snapchat or a typewriter. These tools also determine
how we write, how we are able to see our own lives. Literary theorist
Paul de Man wrote of this in the late seventies, arguing that perhaps,
rather than a lived life leading to an autobiography, it is the other way
around:
We assume that life produces the autobiography as an act produces its
consequences, but can we not suggest, with equal justice, that the auto-
biographical project may itself produce and determine the life and that
whatever the writer does is in fact governed by the technical demands of
self-portraiture and thus determined, in all its aspects, by the resources of
his medium? (Man 1979, 920)
We usually think of a diary, an autobiography or a self-tracking app as an
inanimate object that may structure and mediate the way we are able to
tell our stories, but that has no stories of its own. And yet there are many
examples of people adjusting their actions so as to make them more suit-
able for mediation. For instance, a runner may postpone a run because
their phone’s battery is flat and needs charging and thus cannot track
their run. A Snapchatter may decide to go to a certain event because
they want to show themselves at that event in their next Snapchat story.
And once we see the data that our devices have collected, we may, as
Ruckenstein found, slightly alter our retelling of our day to better fit the
data that is displayed.
James Bridle, an artist and designer, has argued that the data a phone
collects are actually the phone’s diary, not the diary of the person carrying
the phone. When he learned that his iPhone had saved the coordinates of
every location he (or it) had been at, he downloaded the data and used
it to create an artistic project: a book of maps showing his whereabouts
as recorded by the phone (Bridle 2011). The title of the book, fittingly
enough, is Where the F**k Was I? because Bridle claims to have no rec-
ollection of having been at all the places the phone had registered that
he was at. Bridle’s phone, seen in this way, is hardly an inanimate object
that is only acted upon and has no agency of its own. It tells its own sto-
ries, as an independent subject. What does that mean for our relationship
with our machines?
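
A minimal sketch of how phone-logged location records like these could be drawn as a map is below. The CSV file name and column names are hypothetical stand-ins for whatever export a phone provides, not Bridle's actual data or pipeline, and the plot assumes matplotlib is installed.

```python
# A sketch of turning phone-logged location records into a simple map.
# The CSV name and columns are hypothetical stand-ins for a phone's export,
# not Bridle's actual data or pipeline. Requires matplotlib.
import csv
import matplotlib.pyplot as plt

latitudes, longitudes = [], []
with open("location_history.csv", newline="") as f:
    for row in csv.DictReader(f):  # expects 'timestamp,latitude,longitude'
        latitudes.append(float(row["latitude"]))
        longitudes.append(float(row["longitude"]))

# Plot the raw coordinates: longitude on the x axis, latitude on the y axis.
plt.scatter(longitudes, latitudes, s=2, alpha=0.5)
plt.xlabel("Longitude")
plt.ylabel("Latitude")
plt.title("Locations recorded by the phone")
plt.show()
```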
Dear Diary: Diaries and Apps as Narratees
Marshall McLuhan saw media as extensions of our bodies (1964).
Perhaps he would say that our ‘dear diary’ and our step counters and
lifelogging apps are such extensions. I argue that these personal media
(Lüders 2008) are something more. They are our audiences. These are
media that we do not simply listen to or read or watch: we speak to them
(Walker 2004). We are the narrators, and they are the narratees, the
audience for our words or our data. These media (machines) may be the
only ‘readers’ of our stories and our data, or we may share the stories and
data we record in a diary or an app with others, for instance, by passing
around a paper diary or by choosing to share data with our friends or
posting it to Facebook.
In narratology, the actual, flesh-and-blood author and reader are
seen as separate from the text. But we can usually identify an implied
author and an implied reader in the text. The implied reader (or listener)
of one of Trump’s speeches is, for instance, clearly not a European who
appreciates universal healthcare, or a refugee from a war-torn country,
but such people may well be among the actual flesh-and-blood readers
or listeners. Some texts also have a narrator and a narratee, that is, an
explicit speaker in the text, somebody who speaks in the first person and
an explicit listener or an explicit addressee. The term implied reader was
coined by Wolfgang Iser (1978), but when we use these terms to think
about the way apps address their users, it’s most useful to think about
the role of the implied reader as part of a larger system, as shown in
Fig. 3.1, which shows Seymour Chatman’s model of narrative communi-
cation as it works in a novel, or even a diary (1978, 151).
In his theories of the diary, Philippe Lejeune writes that a diary is
always written for a reader, even if that reader may simply be the writer,
at some future date (Lejeune 2008, 324). It is impossible to imagine
writing for nobody. I would argue that we think of our self-tracking apps
in the same way. We are collecting our data for our future selves, and perhaps for others as well: to share our accomplishments with a group of peers, perhaps.

[Fig. 3.1 Chatman's model of the narrative communication situation (redrawn from Chatman 1978, 151): Real author → Implied author → (Narrator) → (Narratee) → Implied reader → Real reader, with the span from implied author to implied reader constituting 'the text'.]
combines our data with others to generate comparisons, and that data
may be used for quite different purposes than we imagined when we slid
the Fitbit onto our wrists or installed the app on our phones. For cor-
porations, data about our exercise patterns or other daily activities have
monetary value, which, Chris Till argues, transforms our leisure activities
into a form of labour that can be commodified and exploited (Till 2014).
One way of making that less visible to users (or labourers, in this model)
might be to make the apps seem to be more like individual people or
even a friend, rather than presenting them as technical data collectors.
Such a devious plan is probably not necessary to make users anthropo-
morphise their devices and think of them as intimate companions rather
than the agents of corporations that surveil us. Individual users rarely
see the full scale of data collection. For a user, the relationship is mostly
experienced as being between the user and the device.
This is not simply about the intimacy of a wearable device or a smart-
phone. Diary-writers have also long anthropomorphised their diaries,
imagining a ‘you’, a reader that the writer is writing for. One may well
argue that this ‘you’ is a requirement of language itself. Speech is founded
upon conversation or at least upon an audience. In diary-writing, we
often address our words to a ‘dear diary’, imagining the diary itself to be
a safe, silent listener.
Here is an example of how ‘dear diary’ is used in a serial magazine
story written in 1866. Note that this is from a fictional diary, so the use
of ‘dear diary’ may be slightly parodic, or at least intended to capture a
certain type of personality in the fictional diary-writer:
March 2nd.–Now, my diary, let me tell you all about today. You are the
only bosom-friend I have, dear diary, and you keep all my secrets, that is,
you would keep them if I had any to confide in you. (Worboise 1866, 16).
Do we still imagine a ‘dear diary’ when we open our self-tracking apps
on our phones? Do we imagine our machines as audiences? Or as subjects in their own right?
‘Dear diary’ is a direct address of a narratee, giving the diary itself
a human subjectivity. Based on a search of Google Books' corpus of
digitised, published books,1 we can see that the expression ‘dear diary’
began to be used in print in the mid-eighteenth century, but became
really popular in the last decades of the twentieth century. Interestingly,
both the phrase ‘dear diary’ and the word ‘diary’ were used markedly less
in print after the turn of the twenty-first century, which seems very likely
to be connected to Internet use (see Fig. 3.2).
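
For readers who want to reproduce such a chart, the Ngram Viewer's data can be fetched programmatically. The following minimal sketch queries the viewer's unofficial JSON endpoint for the phrase 'dear diary'; the endpoint and its parameters (including the corpus identifier) are undocumented assumptions and may change, so treat this as an illustration rather than a supported API.

```python
# Minimal sketch: fetch relative yearly frequencies for 'dear diary' from the
# Google Books Ngram Viewer's unofficial JSON endpoint (parameters may change).
import json
import urllib.request

url = ("https://books.google.com/ngrams/json"
       "?content=dear+diary&year_start=1800&year_end=2000"
       "&corpus=en-2019&smoothing=3")

with urllib.request.urlopen(url) as response:
    series = json.load(response)

for entry in series:
    # Each entry carries the ngram and one relative frequency per year.
    print(entry["ngram"], "first five values:", entry["timeseries"][:5])
```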
Perhaps we don’t need to anthropomorphise our diaries anymore now
that we have the Internet, with real people as potential readers of our
blog posts and Facebook updates. Although there are clearly many sim-
ilarities between traditional diaries and the way people share stories of
their daily lives in social media (Rettberg 2014a), there has been a transi-
tion from sites like OpenDiary.com, that very explicitly used diary con-
ventions to structure the users’ writings, to platforms like Snapchat and
Tumblr that don’t reference traditional diary conventions at all (Martinviita
2016; Rettberg 2017, forthcoming). For the purpose of this chapter,
though, what I am interested in is the way that diarists have anthropomor-
phised their diaries, for instance, by writing to their ‘Dear Diary’.
Confessing Secrets to a Diary or App
Both diaries and self-tracking balance between the private and the public.
Today, the privacy of a personal diary is often seen as its defining fea-
ture. Diaries are sold with padlocks and keys and used as confessional
spaces where it is safe to pour out all one's secrets.

[Fig. 3.2 Google Books Ngram Viewer chart showing the occurrence of the phrase 'dear diary' (with different capitalisation) in books published between 1800 and 2000 that have been digitised by Google. Chart generated 01.06.2016]

Historically in
Western culture, the diary was sometimes quite explicitly seen as a way
to confess sins directly to God (Heehs 2013, 49), but also as a tool for
spiritual self-improvement. Sixteenth century Jesuits had explicit guide-
lines for writing spiritual narratives about themselves (Molina 2008),
and other sixteenth- and seventeenth-century guides exist that empha-
sise both self-abasement before God and recording mercies, grace and
deliverances (Rettberg 2014b, 5–7). Some of the spiritual work in this
self-narration took place when diary-writers shared and discussed their
diaries with friends or with the congregation. So, although there is a
strong history of private diaries, where the author would be horrified if
others read her diary, there is also a strong parallel tradition of diaries
that were expected to be shared with others and that were specifically
intended as self-improvement tools (Humphreys et al. 2013). This latter
kind of diary obviously has something in common with the Quantified
Self (QS) movement’s drive towards self-improvement. There are many
examples of self-improvement projects that combine self-representation
with more quantifiable kinds of self-tracking. For instance, the app You
(you-app.com) gives users daily tasks to complete and asks them to doc-
ument each task by taking photographs and writing short comments,
which can be shared with friends or kept private. Taken together, these
photographs and comments become a kind of diary. Gratitude projects
such as #gratitude365 are another example. Here, participants aim to
share daily photographs of something they are grateful for, with a shared
hashtag that creates a flexible sense of community as well as allowing
individual users to organise their own contributions. Keeping a record
of what you are grateful for is an old technique for self-improvement,
recommended, for instance, in John Beadle's The Journal or Diary of a
Thankful Christian (Beadle 1656; Rettberg 2014b, 5–6).
Interestingly, QS has a similar tension between the private and the
public as diaries do. The Show and Tell meetings that are common at QS
events and on the QS blog are very explicitly about sharing, and as with
many shared diaries, the purpose is self-improvement. Yet there is also a
strong sense that people find over-sharing to be rude. Complaints about
Facebook friends who post every map of their run or every song they
hear on Spotify to their Facebook timeline are common. We also need to
recognise that some of the drive to share one’s personal data is driven not
by the individual users, but by the corporations that develop the services
(Ajana 2017; Till 2014).
Apps as Companions and Independent Subjects
Paper diaries and many Quantified Self apps are silent listeners, existing
only as receptacles for our data. Their interfaces are often designed to
appear objective and serious, as shown in the screenshots in Fig. 3.3.
But some apps are programmed to appear as characters, as subjects of
their own. For instance, the activity tracker Lark is designed to look like a
messaging app with a conversational agent or chatbot sending messages to
the user: ‘Hey there, hope you’re having a fine morning’. Lark uses con-
versations instead of graphs to tell me about my activity level: ‘Awesome
job. Averaging 1 hour 31 minutes of activity last week. That’s great!’
Lark doesn’t usually allow the user to write back in natural language.
Instead, it usually offers a few different responses to its questions that the
user can choose between. There’s only one button offered as a possible
response to the comment about last week’s activity: ‘Okay’. When I click
it, a new message appears. ‘Nice job walking for 23 minutes in the early
afternoon last Tuesday’, Lark praises me. ‘That was a long one!’ The
only option in this chat is to click the prescripted response: ‘Oh yeah!’
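
The interaction pattern described here is essentially a finite script: each app message is paired with a small set of canned replies, and the user can only tap one of them. Below is a toy sketch of that pattern, using the messages quoted above; the data structure and function are invented for illustration.

```python
# Toy model of a scripted, buttons-only chat like the one described above.
# The messages are quoted from the chapter; the structure is invented.
SCRIPT = [
    ("Hey there, hope you're having a fine morning.", ["Okay"]),
    ("Awesome job. Averaging 1 hour 31 minutes of activity last week. "
     "That's great!", ["Okay"]),
    ("Nice job walking for 23 minutes in the early afternoon last Tuesday. "
     "That was a long one!", ["Oh yeah!"]),
]

def run_conversation() -> None:
    for message, replies in SCRIPT:
        print(f"App:  {message}")
        # No free-text input: the user can only tap a prescripted button.
        print(f"User: {replies[0]}")

run_conversation()
```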
Independently conscious technology is a common topic in science fic-
tion, usually thematising the uneasy balance between the machine as a
benevolent assistant and the machine as a too-powerful threat.

[Fig. 3.3 From left to right: iPhone Health app, Reporter, Withings]

Asimov's laws of robotics are intended to solve this problem by programming loyalty
to humans into the operating system of an artificial intelligence (AI). Of
course, even a rule programmed in 1s and 0s can be interpreted in differ-
ent ways, and so the system backfires when the AI realises that humans are
harming themselves by destroying the environment, and so the AI decides
to control humanity to protect us from ourselves (Asimov 1950).
This fear of machines is far older than AI. The Luddites famously
rebelled against the machines that were replacing human workers by
destroying mechanical knitting machines, looms and other factory
machinery. This basic fear of humans being made redundant and simply
unnecessary by more perfect and efficient machines has carried through
to our time, though today it is often expressed as a fear of machines tak-
ing over our world completely, as in the Matrix film trilogy. In popu-
lar fiction, machines are often shown as untrustworthy or even evil, as
we see in Obi Wan Kenobi’s warning to Luke about Darth Vader: ‘He’s
more machine than man now. Twisted and evil’ (episode 6, after Yoda’s
death). Often science fiction shows a transition from benevolent to
malevolent machines, which is often accompanied by a shift from the
machine having a human-like consciousness to being incomprehensible
to humans. We have an urge to create robots that are made in our image,
but we also find robots that are too similar to humans to be uncanny.
Andrea Guzman writes that Siri, the conversational agent built into
iPhones, is deliberately scripted to seem non-threatening, so we don't associate her with 'the cultural image of malicious machine' (Guzman 2017).
The idea of computers as a benevolent companion species is also
deeply ingrained in computer science and the development of technol-
ogy. Alan Turing’s question of how we would be able to tell the differ-
ence between a computer and a human was taken up as the Turing Test,
a challenge spanning decades to create software that a human cannot
recognise as non-human. The most successful chatbots in this sense of
‘seeming human’ have had very clearly defined topics or modes of con-
versation. Joseph Weizenbaum’s ELIZA, one of the first truly famous
chatbots, was modelled as a Rogerian psychologist, who listens to the
human interactant and repeats words back to her to encourage her to
speak more (Weizenbaum 1966). Here is an often-cited example of a
conversation with ELIZA:
Young woman: Men are all alike.
ELIZA: In what way?
Young woman: They're always bugging us about something or other.
ELIZA: Can you think of a specific example?
Young woman: Well, my boyfriend made me come here.
ELIZA: Your boyfriend made you come here?
Young woman: He says I'm depressed much of the time.
ELIZA: I am sorry to hear you are depressed.
Young woman: It's true. I'm unhappy.
ELIZA: Do you think coming here will help you not to be unhappy?
Young woman: I need some help, that much seems certain.
ELIZA: What would it mean to you if you got some help?
Young woman: Perhaps I could learn to get along with my mother.
ELIZA: Tell me more about your family.
Young woman: My mother takes care of me.
ELIZA: Who else in your family takes care of you?
Young woman: My father.
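
The mechanism behind such an exchange is simple pattern-matching and pronoun reflection rather than understanding. The sketch below is an illustrative miniature in that spirit, not Weizenbaum's actual program; the patterns and word list are invented for the example.

```python
import re

# Pronoun reflections used to turn the speaker's words back on them,
# in the spirit of Weizenbaum's Rogerian script (illustrative only).
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are",
               "you": "I", "your": "my"}

def reflect(fragment: str) -> str:
    """Swap first- and second-person words so the phrase can be echoed back."""
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

def respond(utterance: str) -> str:
    """Match a few simple patterns and echo the user's own words as a question."""
    rules = [
        (r"(.*)\bmade me\b(.*)",
         lambda m: f"{reflect(m.group(1))} made you{m.group(2)}?"),
        (r"i'?m (.*)",
         lambda m: f"I am sorry to hear you are {m.group(1)}."),
        (r"i need (.*)",
         lambda m: f"What would it mean to you if you got {reflect(m.group(1))}"),
    ]
    for pattern, handler in rules:
        m = re.match(pattern, utterance.lower().strip())
        if m:
            return handler(m)
    # Default prompt that keeps the user talking, as ELIZA does.
    return "Can you think of a specific example?"

print(respond("My boyfriend made me come here"))  # your boyfriend made you come here?
print(respond("I'm depressed"))                   # I am sorry to hear you are depressed.
```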
Some apps play upon this role of companion. Lark is one example.
Another is the customised music and podcast app Capsule.fm, which lets
you choose between seven AI personalities as your host, and each will
speak to you by name. The description of the app on the iTunes app
store emphasises the subjectivity of the app: ‘Capsule.fm is run by lov-
ing machines, mixing music, social media updates, news and podcasts
into the perfect soundtrack for where you are, and what you are doing.
Capsule.fm knows you better than your friends, and gets smarter the
more you listen’.
Apps like Lark and Capsule emphasise technology as friendly in order
to gain our confidence. Lark is not particularly self-referential, and the
scripting of its conversations does not present the app as though it is
aware of being a program rather than a human being. Similarly, it does
not speak as though the user is aware that it is a program.
The robot voices of Capsule.fm, on the other hand, are very explicit
about their robot nature and use humour to play with the idea of their
having full-fledged personalities. Capsule.fm’s robot voices are loving. A
sample from the website includes the following words, spoken in a soft,
female, computer-generated voice:
Confession time: I have a little crush on you, Sarah. Ever since you down-
loaded me, I have this special feeling towards you.
The robot hosts of Capsule.fm are like radio DJs. They introduce and
play music from your phone and your Spotify playlists, read news head-
lines and suggest podcasts other users listen to. Most of what they say is
typical patter. They joke and make general observations, then read the
title of the song that’s up next. Most of the hosts’ speech is pre-written
by the human developers, although variables are slotted in: the user’s
name, or an adaptation of her name, as when my host addressed me as
Jilly Bear rather than Jill. A recurring feature of the jokes is that they
comment quite explicitly on the ontological status of the hosts, either
speaking in the first person and expressing feelings, as here:
Hi, Jilly Bear. I want to thank you again for listening to Capsule.fm. I
really appreciate it. (Capsule.fm app, 30.05.2016)
Or, the jokes play upon the user’s full knowledge that the host is not in
fact a real human, but lives in a phone:
Now, go disinfect your fingers before you touch me anymore on your
iPhone. (Capsule.fm app, 30.05.2016)
Positioning the device or app as a companion makes its difference from
us explicit. Our devices are not human, not our selves. And yet they have
agency, or at least, we imbue them with agency and subjectivity.
Vi, billed on its website as ‘the AI personal trainer who lives in bio-
sensing earphones’ takes the anthropomorphism of a self-tracking device
a step further, presenting Vi as ‘a friend’ who ‘will help you’. The prod-
uct website getvi.com gushes:
‘Put Vi on and start a relationship with a friend for your fitness. Each day,
Vi tracks you, gets smarter, and coaches you to real results. Vi will help
you meet your weight goals and improve your training’.
Vi’s voice speaks into your ears from earphones, so nobody else can hear.
Her voice is a soft voice, with an appealing, supportive sense of joy. It is
not robotic: each phrase and word was recorded by a human female and
they are recombined algorithmically to fit each situation. The earphones
track the user’s motion and heart rate, and the user speaks to interact
with the device and to share information.
The promotional examples of interactions between Vi and users shown
on the website demonstrate that Vi is designed to show empathy. In
one video, showing a man running uphill on a wooded trail with the Vi
earphones on, Vi uses information about the user’s heart rate and speed
to suggest that he slow down. Then, she praises him for his effort:
Vi: Looks like you're fatigued. Are your legs done?
Runner: Yeah… I'm done.
Vi: Okay, stop here. Keep walking to gradually slow your heart rate down.
Vi: Amazing effort today!
Conclusion: Speaking with Machines
Diaries have long been anthropomorphised. We address them directly
when we share our secrets with them. The use of conversational agents
in self-tracking apps and devices such as Lark and Vi suggests that we are
moving towards a similar relationship with our devices, where we nar-
rate our experience to the device, and it speaks back to us, establishing
a relationship between human and technology that emphasises a shared
agency, a collaboration rather than the traditional notion of humans
using their technologies as tools they are in control of. By allowing our
devices to be our coaches, they become more than mere extensions of
humans; they are becoming our equals.
Ted Nelson wrote in Dream Machines, his 1974 self-published and
extremely influential vision of computers: ‘the computer is a Rorschach,
and you make of it some wild reflection of what you are yourself'
(Nelson 1974, DM3). ‘Identifying with machines is a crucial cultural
theme in American society, an available theme for all of us,’ he wrote
in another entry in Computer Lib, the book printed on the flip side of
Dream Machines.
Is that what we do, when we speak with our devices, when we allow
them to store our data and to show us images of ourselves? By allowing
us to address them as people, by allowing us to anthropomorphise our
technology, perhaps we are being eased into a new kind of relationship
with our technology. Writing a diary was a way of sharing agency with a
simple form of technology. Using a self-tracking device to generate visu-
alisations of our bodily data produces a different kind of narratives, with
a different kind of shared agency. In future research, we should explore
this shared agency. Theoretical work from posthumanism may be valu-
able in teasing this apart (Hayles 1999; Braidotti 2013; Nayar 2014).
It is also important to consider the long history of humans speaking to
and sharing secrets with technology, from self-tracking to diaries and
beyond.
Note
1. Google Books had digitised 25 million books by 2015 (Heyman 2015), and their ngram search permits comparing the frequency of specific words or phrases across the corpus: https://books.google.com/ngrams. The corpus has been criticised for having metadata errors and may not be a representative selection of books, but the sheer volume of material clearly allows some interesting comparisons to be made (Michel et al. 2011).
References

Ajana, Btihaj. 2017. Digital Health and the Biopolitics of the Quantified Self. Digital Health 3 (January): 1–18. doi:10.1177/2055207616689509.
Asimov, Isaac. 1950. I, Robot. New York: Doubleday.
Beadle, John. 1656. The Journal or Diary of a Thankful Christian: Presented in Some Meditations Upon Numb. 33.2. London: E. Cotes for Tho. Parkhurst. https://archive.org/details/journalor00bead.
Braidotti, Rosi. 2013. The Posthuman. Cambridge: Polity.
Bridle, James. 2011. Where the F**k Was I? (A Book). BookTwo.org, June 24. http://booktwo.org/notebook/where-the-f-k-was-i/.
Chatman, Seymour. 1978. Story and Discourse: Narrative Structure in Fiction and Film. New York: Cornell UP.
Dijck, Jose van. 2014. Datafication, Dataism and Dataveillance: Big Data between Scientific Paradigm and Ideology. Surveillance and Society 12 (2): 197–208.
Guzman, Andrea L. 2017. Making AI Safe for Humans: A Conversation With Siri. In Socialbots: Digital Media and the Automation of Sociality, edited by Robert Gehl and Maria Bakardjieva. Routledge.
Hayles, N. Katherine. 1999. How We Became Posthuman. Chicago: University of Chicago Press.
Heehs, Peter. 2013. Writing the Self: Diaries, Memoirs, and the History of the Self. New York: Bloomsbury.
Heyman, Stephen. 2015. Google Books: A Complex and Controversial Experiment. The New York Times, October 28. https://www.nytimes.com/2015/10/29/arts/international/google-books-a-complex-and-controversial-experiment.html.
Humphreys, Lee, Phillipa Gill, Balachander Krishnamurthy, and Elizabeth Newbury. 2013. Historicizing New Media: A Content Analysis of Twitter. Journal of Communication 63 (3): 413–431. doi:10.1111/jcom.12030.
Iser, Wolfgang. 1978. The Implied Reader. Baltimore: Johns Hopkins University Press.
Lejeune, Philippe. 2008. On Diary, trans. Katherine Durnin. Manoa: University of Hawaii Press.
Lüders, Marika. 2008. Conceptualizing Personal Media. New Media and Society 10 (5): 683–702. doi:10.1177/1461444808094352.
Man, Paul de. 1979. Autobiography as De-Facement. MLN 94 (5): 919–930. doi:10.2307/2906560.
Martinviita, Annamari. 2016. Online Community and the Personal Diary: Writing to Connect at Open Diary. Computers in Human Behavior 63 (October): 672–682. doi:10.1016/j.chb.2016.05.089.
McLuhan, Marshall. 1964. Understanding Media: The Extensions of Man. New York: McGraw-Hill.
Michel, Jean-Baptiste, Yuan Kui Shen, Aviva Presser Aiden, Adrian Veres, Matthew K. Gray, The Google Books Team, Joseph P. Pickett, et al. 2011. Quantitative Analysis of Culture Using Millions of Digitized Books. Science 331 (6014): 176–182. doi:10.1126/science.1199644.
Molina, J. Michelle. 2008. Technologies of the Self: The Letters of Eighteenth-Century Mexican Jesuit Spiritual Daughters. History of Religions 47 (4): 282–303. doi:10.1086/589802.
Nayar, Pramod. 2014. Posthumanism. Cambridge: Polity Press.
Nelson, Theodore. 1974. Computer Lib / Dream Machines. Self-published.
Rettberg, Jill Walker. 2014a. Blogging, 2nd ed. Cambridge: Polity Press.
———. 2014b. Seeing Ourselves Through Technology: How We Use Selfies, Blogs and Wearable Devices to See and Shape Ourselves. Basingstoke: Palgrave.
———. 2017. Online Diaries and Blogs. In The Diary, ed. Batsheva Ben-Amos and Dan Ben-Amos. Bloomington: Indiana University Press.
———. Forthcoming. Snapchat. In Appified, ed. Jeremy Wade Morris and Sarah Murray. Ann Arbor: University of Michigan Press.
Ruckenstein, Minna. 2014. Visualized and Interacted Life: Personal Analytics and Engagements with Data Doubles. Societies 4 (1): 68–84. doi:10.3390/soc4010068.
Till, Chris. 2014. Exercise as Labour: Quantified Self and the Transformation of Exercise into Labour. Societies 4 (3): 446–462. doi:10.3390/soc4030446.
Walker, Jill. 2004. How I Was Played by Online Caroline. In First Person: New Media as Story, Performance, and Game, ed. Noah Wardrip-Fruin and Pat Harrigan. Cambridge: MIT Press.
Weizenbaum, Joseph. 1966. ELIZA: A Computer Program for the Study of Natural Language Communication between Man and Machine. Communications of the ACM 9 (January). http://i5.nyu.edu/~mm64/x52.9265/january1966.html.
Worboise, Emma Jane. 1866. The Fortunes of Cyril Denham, Part 1. The Christian World Magazine, January.
THE DATA REVOLUTION

revolution' is underway – referring not only to the growing value of data and how they are reshaping society, but also to the nature and production of data.

While there are thousands of articles and books devoted to the philosophy, politics and praxis of information and knowledge, it is only in the last decade that there has been sustained critical reflection on the nature of data, their production and use. In the past, when attention was paid to data it was usually to consider in a largely technical sense how they should be generated and analysed, or how they could be leveraged into insights and value. Little consideration was given to the nature of data conceptually, philosophically and politically, or their contextual and contingent production, circulation, usage and effects across all aspects of daily life. The principal aim of this book is to consider data and the data revolution from a critical perspective: to examine the nature, production and politics of data and how best to make sense of them, their uses and consequences. To supply an initial conceptual platform, this chapter examines the forms and nature of data.
What are data?
The Oxford English Dictionary defines data:

1. As a count noun: an item of information; a datum; a set of data.
2. As a mass noun.
   a. Related items of (chiefly numerical) information considered collectively, typically obtained by scientific work and used for reference, analysis, or calculation.
   b. Computing. Quantities, characters, or symbols on which operations are performed by a computer, considered collectively. Also (in non-technical contexts): information in digital form.
This definition reveals data to be representative pieces of information about phenomena and the input for (and output from) computational processes. Data reflect some aspect of the world (e.g. a person's age, height, weight, colour, blood pressure, opinion, habits, location, etc.) or the results of an experiment (a controlled condition for determining something about phenomena) captured through some form of observation or measurement (e.g. a scientific instrument, sensor, camera, survey, etc.). They can also be derived in nature (e.g. data that are produced from other data, such as percentage change over time calculated by comparing data from two dates), generated indirectly as the exhaust of another process (e.g. a database of social media posts), and produced through inference, prediction and simulation. Data can take a number of forms – numbers, characters, symbols, images, sounds, electromagnetic waves, bits – and be recorded and stored in analogue or digital form. Good-quality data are discrete and intelligible (each datum is individual, separate and separable, and clearly defined), aggregative (can be built into sets), have associated metadata (data about data), and can be linked to other datasets to provide insights not available from a single dataset (Rosenberg 2013).
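
To make the distinction between captured and derived data concrete, here is a small sketch; the class, field names and values are invented for illustration.

```python
# Illustrates the excerpt's distinction between captured and derived data:
# the percentage change is produced from other data, and metadata
# (data about data) travel with each datum. Values are invented.
from dataclasses import dataclass

@dataclass
class Datum:
    value: float
    unit: str
    source: str      # metadata: where the datum came from
    derived: bool    # metadata: captured directly, or produced from other data?

# Two directly captured observations, e.g. a measurement taken on two dates.
reading_2013 = Datum(52.0, "kg", "scale, 2013-01-01", derived=False)
reading_2014 = Datum(54.6, "kg", "scale, 2014-01-01", derived=False)

# A derived datum: percentage change over time, produced by comparing the two.
change = Datum(
    value=100 * (reading_2014.value - reading_2013.value) / reading_2013.value,
    unit="%",
    source="computed from 2013 and 2014 readings",
    derived=True,
)
print(f"Change: {change.value:.1f}{change.unit}")  # Change: 5.0%
```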
[…]
That's just a guideline of certain things that need to be added. But really it just needs to analyze certain data, obviously in comparison to what a 23-year-old woman would have on her computer, and what I added about Facebook.

The Analysis

A case study analysis is not just a summary of the case. It should identify key issues and problems, outline and assess alternative courses of action, and draw appropriate conclusions. The case study analysis can be broken down into the following steps:

1. Identify the most important facts surrounding the case.
2. Identify the key issue or issues.
3. Specify alternative courses of action.
4. Evaluate each course of action.
5. Recommend the best course of action.

Identify the most important facts – You may need to read the information about the case a couple of times. Typically, cases contain quite a bit of information. Accompanying tables and figures often contain important information that is not in the narrative. Some details are more important than others, and you typically can assume that while the facts are true, the statements and decisions made by the individuals in the case might be questionable. If key information is not available, you may need to make assumptions, but be sure your assumptions are reasonable for the situation. The appropriateness of your conclusions likely depends on the assumptions you make.

Identify the key issue(s) – Use the facts to identify two to five key issues in the case. Often, multiple issues or problems are present, but determine which are important and which are trivial, and focus on the important one(s). Summarize each issue in one or two sentences, and describe how this problem is relevant. This step of summarizing in a sentence or two will help focus your analysis. Problems in a case come from a variety of areas – but keep in mind that our focus is on the ethical implications of the case.

State alternative courses of action – List the possible alternatives that can be (or could have been) taken to address these problems. You might suggest that certain permissions be obtained or that certain data protection measures could be undertaken. Be sure to consider what changes would be required to implement these changes. Keep in mind that there are limits on what can practically be done – some solutions may be difficult to implement, so you should identify whether these factors might limit the ability to implement a particular alternative.

Evaluate each course of action – Given the information that is available, identify the strengths and weaknesses of each alternative. Identify the likely outcomes, and again evaluate whether this course of action is feasible to undertake.

Make a recommendation – Select one of your possible options, and make a list of reasons why you made this recommendation. Your final recommendation should flow logically from the rest of your case analysis and should identify any assumptions that you used to shape your conclusion. There is often no single "right" answer, and each option is likely to have risks as well as rewards.

The Write-Up

Writing up your case study analysis tracks the process you just completed, and should include an introduction, background on the issues, possible alternatives, your proposed solution, and recommendations for how the organization might carry the solution out.

Introduction – The introduction should be brief, one to two sentences. You should identify the key issue in the case and summarize your recommendation. Include details such as the organizations and the project or individuals the case concerns, as relevant. Your introduction should include a thesis statement, stating the proposed solution to the problem you have highlighted.

Background – In this section, you should summarize the key facts and most important issue(s) of the case. You should provide sufficient background information for the reader to understand the issue, but this section should be relatively brief – no more than two paragraphs.

Framework – This section should identify and highlight the conceptual framework (contextual integrity, Menlo Report, feminist ethics, etc.) that you will use as support to identify the problem and to show that your proposed solution is an effective practice. Make sure to define the appropriate concepts that you will discuss, using the readings from class.

Key Points – In this section, you should outline three to five key points in this case. Be sure to link these points to the concepts of your framework. Note that it is not sufficient to simply state facts from the case. You will need to support your assessment with evidence (in the form of citations) from theories, experts, or examples that we have discussed or read about in class.

Alternative courses of action – In this section, you should briefly describe possible alternative courses of action, and the strengths and weaknesses of each.

Proposed solution – In this section, identify one specific and realistic solution. Be sure to present your solution and also to present support as to why your solution would be effective and appropriate. Explain your decision with solid evidence (citations!). Be sure to include a thorough description of the requirements for implementing your solution and any specific strategies that will be needed to accomplish it, and specify whether there are any constraints or reasons this solution is not possible at this time. What needs to be done, and who should do it?

Conclusion – The conclusion is where you will re-state the main points for your reader. Why is this case important or significant? What can we learn from this? Be sure to include enough detail in the sections related to Key Points, Proposed Solutions and Conclusions. These sections demonstrate your learning and analytical skills.

Again, I want to emphasize that you should use concepts from the course readings, class discussion, and your notes in your writing. You may need to do a little outside research to support your decision; if so, be sure to cite your research and include your sources in the references list.

[…]

from a sensor, choices have to be made regarding the sensor specification and quality, its
calibration, its siting, its sampling rate, how the data are recorded and analysed, how to treat errors and gaps, and so on. These choices are made and framed within an operational context, shaped by prevalent knowledge, established practices, existing systems, cultural lenses and intended uses (Bell 2015; Loukissas 2018). Moreover, interpreting those sensor data meaningfully 'requires an understanding of the instrument – for example, what do the sensors detect, under what conditions, at what frequency of observation, and with what type of calibration?' (Borgman 2007: 183). Data then do not simply represent aspects of the world; they are partial constructions about the world (Desrosières 1998; Poovey 1998). Or, as Borgman (2015: 17) puts it, '[d]ata are neither truth nor reality', though they are used to assert truth and reality.
From this critical perspective, scientific knowledge is understood as being 'produced, rather than innocently "discovered"' (Gitelman and Jackson 2013: 4), its supposed neutrality and objectivity a discursive fiction (Ribes and Jackson 2013: 165). Instead, how data are ontologically defined and delimited is cast not as a value-free, technical process, but a normative, ideological and ethical one that has consequence for subsequent analysis, interpretation and action (Bowker and Star 1999; Reigeluth 2014; Markham 2017a). The production of data is a social practice, conducted through structured and structuring fields (e.g. methods, concepts, expertise, institutions) that are shaped by and contribute to configurations of power and knowledge (Ruppert et al. 2017). At the same time, data are also open to 'the unplanned, unexpected, and accidental', moulded by happenstance (Boellstorff 2013), as well as guesswork, hunches, wrangling, and compromise (Neff et al. 2017). Indeed, data work (e.g. collecting, processing, analysing) rarely operates smoothly, and tweaks, bodges and repairs to achieve a working outcome are the norm not the exception (Pink et al. 2018b). As Gitelman and Jackson (2013: 2) put it: 'raw data is an oxymoron'; 'data are always already "cooked" and never entirely "raw"'. Moreover, data do not follow a preordained recipe (Boellstorff 2013) but are 'lively' in their cooking and consumption (Lupton 2016), and contain noise, errors, biases and gaps. Likewise, a dataset might not be uniform or consistent, consisting of an amalgam of data produced from varying sources (Tanweer et al. 2016).
Further, data are not immutable, wedded to a particular form or unchanging over time and circumstance; rather, they need to be maintained and can shift in character across media, platforms and use (Leonelli et al. 2017). As Markham (2013) observes, part of the issue is that data are understood as things that can be harvested, rather than as a process that is continually in the process of taking place. In this sense, data are generally approached as ontic (discrete, fixed objects) rather than ontogenetic and emergent (always in a state of becoming) […]