Part 1: Quoting
Required source: A professional journal article from the list presented in the Library section of the classroom as explained above. Do not look for quotes already presented in the article; your mission is to find direct statements in the article and quote them yourself.
Quotation 1: Parenthetical citation
- Choose a meaningful statement of 25–39 words from the article and quote it without introduction, using in-text citation after the end-quotation mark and before the final sentence punctuation.
Quotation 2: Narrative citation
- Choose a different meaningful statement of 25–39 words from the same article and quote it properly, starting your sentence with “According to” or a similar introduction, and inserting proper citation as explained in the reading.
Required adjustment:
- Edit just one of your two quotes by correctly using brackets, an ellipsis, or [sic]. These techniques are explained in the reading.
- If the original does not have an error, you cannot use [sic] and must instead employ either brackets for a clarification or an ellipsis to delete words. Note that British English spellings are not considered errors.
Reference entry:
- Provide a full 7th edition APA-standard reference entry for this journal article.
Part 2: Paraphrasing From Two Other Articles
Choose two other journal articles from the same Library list. It is recommended that you pick articles that are relatively easy for you to understand, especially if you are new to the technology field. Find a section of each article that interests you and write paraphrases.
For each of your two paraphrases, separately:
- Compose a descriptive title (a phrase) in your own words. Use title case.
- Write a paraphrase of 170–220 words. If it is difficult to meet the minimum length or to avoid writing more than the maximum, then a more suitable section (or section size) from the original article must be chosen.
- Do not include any quotes.
- Write the paraphrases in paragraph form (no lists).
- Include proper citation as explained in the reading.
- Provide a full 7th edition APA-standard reference entry.
Impact of technology on
community nursing during
the pandemic
Kathryn Rose Grindle
Advanced Nurse Practitioner, Liverpool John Moores University
katgrindle@hotmail.com
ABSTRACT
The purpose of this article is critical analysis, reflection and discussion regarding the uses and impacts of technology in community settings, specifically care homes, during the COVID-19 pandemic. This is investigated with special emphasis on virtual assessment platforms and their use within care home settings, and reviews specific data collected on their usage within community care homes. The article outlines the positive attributes and critically reflects upon the benefits of using audio and video conferencing when assessing patients, and the beneficial impacts this has had on patients and the wider health community, while also addressing the obstacles and threats faced by clinicians in the use of assessment software.

KEY WORDS: Community; Care homes; Attend Anywhere; Infection risk; Emergency
The COVID-19 pandemic has been described by Vannabouathong et al (2020) as the largest and most rapidly spreading threat to global health since the
Spanish flu of 1918, which is estimated to have killed over
50 million people. Fauci et al (2020) reported the disease as a
global, life-threatening viral infection affecting the respiratory,
gastrointestinal and neurological systems. There is minimal
literature that can specifically define all characteristics of
the virus, as its epidemiological characterisation remains in
the nascent stage. In a study discussing genetic mutation,
Grubaugh et al (2020) described the SARS-CoV-2 virus as
causing ever-changing infection, and they further suggested
there is no cure. In their recent article for Lancet Infectious
Diseases, Baud et al (2020) reported that the global mortality
rates from March to May 2020 increased by 5.7% solely due
to the virus.
International evidence presented by Comas-Herrera et
al (2020) and McMichael et al (2020) showed that patients
residing in care homes are a particularly vulnerable group
for severe COVID-19 infection, because they typically have
multiple underlying chronic and long-term conditions and
require 24-hour care. At the height of
the pandemic in the weeks of 1–17 April 2020, Field et al
(2020) reported that the death rate in UK care homes had
risen by 500% due to COVID-19. This statistic is of utmost
importance in understanding the required rapid rollout of
new assessment processes for the vulnerable. Rekatsina et al
(2020) described key factors shown to have affected some of
the most vulnerable members of society during the COVID-
19 pandemic, including the severity of the virus and extremely
high rates of mortality, coupled with the large volume of staff
sickness resulting in care homes becoming overwhelmed and
unable to provide care for their residents.
Recommendations made in a study relating to the reduction
of patient and staff exposure to viruses via the use of telephone
assessment strongly suggested the alternative use of telephone
or virtual consultation for assessment (Milusheva, 2020).
This has also been recommended by Shehata et al (2020),
who described the emphasis clinical commissioning groups
(CCGs) across the UK have placed on the importance of
other mediums of assessment in the current climate. The use
of other assessment mediums is evidenced as dating back as
early as 1876. Pierce et al (2020) described the introduction
of the telephone as being revolutionary within healthcare.
The first incidence of it being used to seek medical attention
was by the inventor himself, Alexander Graham Bell, when he
used it to seek help after spilling sulphuric acid on himself; by
1970, enthusiasts described the telephone as being as much a
part of the standard equipment for a clinician as a stethoscope.
Greenhalgh et al (2020) stated that telephone triage and
assessment have now become the first line in the provision
of healthcare in the community in the wake of the COVID-
19 pandemic. Several groups emphasised the importance of
using telemedicine to limit exposure and alleviate the burden
placed on healthcare systems by the COVID-19 pandemic
(Eurosurveillance Editorial Team, 2020; Reeves et al, 2020;
Zhou et al, 2020). These were further supported by Smith
et al (2020), who urged that all
patient-facing assessments be prioritised for triage via telephone or video
consultation to limit unnecessary exposure of staff and patients
to the virus. Duffy and Lee (2018) went further in suggesting
that in-person visits should be the second or third option
in reducing the exposure and potential spread of infectious
diseases, and they emphasised the protection of the most
vulnerable members of society and the public by decreasing
the required movement of symptomatic individuals.
Attend Anywhere
In community nursing, a strong emphasis has been placed
on the use of telephone triage followed by video assessment
to reduce exposure for staff and patients, thereby preventing
an increase in the ‘reproductive number’, which is
indicative of the infection rate. In a qualitative study by
Donaghy et al (2019) investigating the use of telemedicine,
the results showed a positive reaction to the use of technology
to decentralise the patient-centric model and allow it to catch
up with the more modern requirements.
Attend Anywhere is a video conferencing medium
that supports the visual and audio assessment of a patient,
while limiting footfall in high-risk areas. The platform, which
has been used effectively since its inception in Melbourne,
Australia, in 1998, was implemented nationwide by NHS
England in 2018 and has spread across 45 trusts. In an
Australian study by Corden et al (2020) on the use of remote
assessments, the authors credited the lower infection rate to
remote assessing, and they recommended the use of remote
assessment mediums. They reported on their experience in
a dermatology setting, where 800 patients could be triaged,
assessed and treated using video assessment, which reduced
footfall and decreased the exposure risks and improved
patient satisfaction.
Due to the COVID-19 pandemic, trusts throughout the
UK were supported to rapidly roll out the Attend Anywhere
tool, which was funded by NHS England. The goal was to use
this medium amid the crisis, coinciding with Government
regulations for the vulnerable to shield. Beland et al (2020)
recounted the guidance banning all but essential
visits to care homes across the UK due to the pandemic.
This guidance has only ever been encouraged on this scale
once before, amid a norovirus outbreak in Scotland (Currie et
al, 2016). With this being the new normal for the foreseeable
future, most trusts were required to continue to provide
nursing services, but had to be innovative in their delivery to
protect staff and patients. This is how virtual assessments in
care homes were born. However, this new way of working
was met with trepidation and apprehension by many staff who
found working through a pandemic already overwhelming
without the added pressure of a new assessment medium
being implemented. Therefore, the uses and effectiveness of
virtual assessment mediums were assessed during this time.
Evaluation of effectiveness
A recent evaluation of the provision of healthcare was
conducted over a 2-week period with an emphasis on
Attend Anywhere in care homes and a predominant focus on
community matron assessment. According to the findings, a
total of 30 visits were requested via the single point of contact
service between 20 and 31 July 2020. Some 23.3% of patients
(7 of 30) were seen in person, 60% (18) were assessed via the
telephone and 16.7% (5) were seen via Attend Anywhere. The
76.7% of assessments completed via telephone or Attend Anywhere
represented a vast reduction in footfall when compared
with previous audits, which had shown that 79% of visits
completed before March 2020 had been completed in person.
The results of this small snapshot of nursing care using
a video conferencing platform suggested a reduction in
footfall within the predominantly at-risk areas via the use
of telephone and video conferencing. These data indicate
that, during this time period, the use of such platforms was
positive and effective in enabling assessment and reduction
of exposure of both staff and patients, as face-to-face visits
were not required.
The avoidance of face-to-face visits is paramount. Many
care homes house patients with cognitive deficits, such as
dementia or Alzheimer’s disease, which warrant one-to-
one supervision (Livingston et al, 2017). In most cases, this
cannot be provided, and, therefore, large communal
living spaces are used to ensure resident safety as residents
wander through the home, become agitated when staff
try to redirect their attention, attempt to physically engage
with other residents and touch various objects, which could
be dangerous. It has been suggested that these activities by
people with cognitive impairments significantly increase the
risk of rapid disease transmission (Killen et al, 2020; Suzuki
et al, 2020). Therefore, all recommended guidance for the
protection of vulnerable patients should be strictly adhered to.
Brown et al (2020) supported the use of alternative assessment
mediums in these cases. Corden et al (2020) went further
to highly recommend video assessment for complaints that
could be assessed visually, such as eye infections, rashes in areas
that will not indecently expose patients, infected wounds and
some new wounds (to give dressing advice). However, they
said that home visits were generally required in the case of
complaints that required audible assessment, such as a chest
infection or possible bowel obstruction, which also required
a manual assessment, including palpation (Corden et al, 2020).
Barriers to telemedicine
When reviewing wider data from the health community,
Unadkat et al (2020) reported that video consultation systems
can be impersonal and that difficulties arose with the speed
of the IT system. Conversely, Connor et al (2020) highly
recommended the use of video assessment mediums, such as
Attend Anywhere or Telehealth, in the provision of healthcare
by allowing remote assessment of patients using electronic
communication tools. The authors said that such systems are
crucial in avoiding unnecessary attendance to hospitals and,
therefore, reduce contamination risk. In addition, there is the
benefit of not having to cancel appointments en masse to
adhere to regulations.
The morality of offering patients a diagnosis over a video
consultation, however, has been questioned by Humphreys et
al (2020), who considered the ethical implications of offering
a diagnosis without the usual support of specialist nurses or
other appropriate persons. Family members banned from visits
as per Government guidance cannot offer moral support or a
more simplified explanation to their loved ones (Gardner et
al, 2020). Consequently, Sorinmade (2020) proposed that this
must be considered by the diagnosing clinician, who must be
sympathetic and conscious as to what is appropriate to discuss
over a video consultation. Further, family members joining
the call where appropriate must be supported in relation to
consent and lasting power of attorney (NHS England and
NHS Improvement, 2020).
Further obstacles to the use of video assessments have also
been described by staff and echoed by Gann (2020) and
Hammersley et al (2019), who reported that COVID-19 has
identified some communities, predominantly those of older
people living in care homes, as experiencing social, economic
and digital deprivation. To combat this, there are many digital
initiatives across the UK during the pandemic, such as Attend
Anywhere, telemedicine and IMedicine, which are supplying
devices and education in relation to digital skills, in order to
support the agenda that NHS England has implemented and
reduce footfall in care homes to limit exposure (Hollis et al,
2015). However, deprivation in care home communities is a
prevalent risk, and the complexities in different demographics
correlate with access to digital technology. This has been
highlighted by Holmes Finch and Hernández Finch (2020);
despite this being an American study, the data translate and
reflect the narrative in the global health community. The
authors described how more affluent areas have access to
modern technology supporting digital assessment as well as
having the means to financially support high internet usage.
Financial constraints when using virtual assessments have
been addressed (Brouwer et al, 2017): it has been
reported that the technology is widely whitelisted, meaning
that large telephone and broadband companies provide
access to these digital services free of charge so as not
to disadvantage anyone. Most recently, Vodafone and O2
have whitelisted Attend Anywhere in the UK, making it free
to access.
Another barrier to the effective usage of the digital
consultation system is staff members’ literacy and IT skills.
Robinson et al (2020) and Visca et al (2020) reported on
the impact of digital inequalities in different healthcare
sectors in relation to the education level, and argued that
there is a consequent impact on patient vulnerability to
disparate healthcare. This is supported by earlier evidence,
which suggested that people with poor literacy skills receive
less effective healthcare, due to lack of understanding
or inability to implement healthcare plans owing to
comprehension difficulty (Taylor et al, 2013). With regard
to telemedicine, it can be the case that patients are being
managed by staff who do not have compatible IT skills, and
this could be one of many obstacles in the provision of care
to the vulnerable (Blackburn et al, 2005).
The final threat to the implementation of the video
conferencing system is change management for healthcare
providers. Irrespective of sector, if change is to be effectively
implemented and successful, leaders in the field must be
advocates, lead by example and encourage usage (Zaman
et al, 2020). Opinions offered by some are that there is
evidence of positive digital leadership within trusts, but this
is not reflected at team level. Sheninger (2019) discussed the
importance of leaders in encouraging change and the use of
technology in the best interest of patients.
Sellars et al (2020) encouraged leaders to share positive
outcomes and opportunities with staff. In their review of the
use of virtual consultation mediums, they reported very few
patients as having difficulties with technology, and attendance
for virtual appointments was very high; in fact, it was higher
than that for face-to-face appointments. Further, 6685 miles
of travel, equivalent to 148 hours of travelling time, were saved
for patients, with savings for the total number of patients
amounting to £1767, not including the approximately
£33.56 that each patient may have saved by preventing loss
of earnings. Additionally, the environmental impacts were
massive, as carbon emissions were lowered by 4659 lb, which
is the equivalent of over 250 000 charges of a smartphone.
Ziebland and Wyke (2012) proposed sharing good news,
which positively impacts patients, and using it as a vehicle
to encourage change. Further, they suggested that sharing
data empowers the recipient to support change. Mannion
and Goddard (2001), however, warned that, although sharing
of data works positively in positive cultures, in areas
where professionals doubt the quality of
the data, the practice may have
the opposite effect to the one desired. They concluded that
informal verbal information is often better received and well
thought of in the encouragement of change. The collaborative
sharing of data should be a common practice according to the
Nursing and Midwifery Council (NMC) (2018), and change
is best supported with strong leadership and encouragement,
evidence of positive results and caution in areas in which the
validity of data will be questioned.
Summary
Virtual consultations allow face-to-face visits to be completed
in a safe way and in accordance with national guidance.
Overall, the evaluation of virtual consultation usage was found
to be positive, and was in line with feedback from the wider
health community. The reduced exposure risk to patients and
staff was paramount and outweighed any problems faced.
Problems such as connectivity issues could be rectified. A
recommendation from this evaluation is that other nurses
within the community nursing sphere should endeavour to
use virtual consultation mediums as an alternative whenever
it is possible and safe, in order to reduce exposure risk. Threats
to safe usage should be risk assessed, and appropriate action
should be taken to minimise risk.
Conclusion
The COVID-19 pandemic has forced the NHS to be
progressive and innovative in its delivery of healthcare. The
fatal nature of the SARS-CoV-2 virus is reflected in the
volume of care home deaths. To prevent further risk and
exposure, consultation mediums have to change and reflect
what is now required to keep patients safe. The evidence
shows that telephone and video assessment, which have been
in place for many years and have been used effectively, are a
possible option. Video assessment is relevant, now more than
ever, for staff working through a pandemic and attempting to
remain safe, as well as for vulnerable patients. The economic,
environmental and physical benefits of video assessments
outweigh any risks, which can be managed effectively for
the patients who reside in care homes. Thus, such alternative
assessment methods should be encouraged wherever safe.
Moving forward in an uncertain world, technology will be
the basis of many healthcare assessments. As the famous author
Matt Mullenweg once wrote, ‘Technology is best when it
brings people together,’ and this technology will certainly
allow people to come together in a new way. BJCN
Accepted for publication: January 2021
Conflicts of interest: none
KEY POINTS
- The use of telephone triage and assessment and of video assessment has been present for much longer worldwide than it has been in the UK
- The use of technology, in particular platforms supporting the audio and video assessment of a patient, reduces the risk to the patient and the wider nursing community by reducing footfall in care homes
- At the author's trust, a video consulting platform called Attend Anywhere provided positive outcomes for patients, while also providing cost efficacy
- A consideration is that the lack of face-to-face appointments could increase the vulnerability of patients experiencing digital deprivation, which is a worldwide risk
- The widespread use of technology in healthcare must be supported by effective change management, which must be driven by leaders

CPD REFLECTIVE QUESTIONS
- What are the benefits and disadvantages of providing assessments over video consultations?
- How can your trust support the use of technology in assessments?
- What tasks within the remit of your role would you be able to complete over video? For what tasks might video consulting be inapplicable?
Baud D, Qi X, Nielsen-Saines K, Musso D, Pomar L, Favre G. Real estimates of
mortality following COVID-19 infection. Lancet Infect Dis. 2020; 20(7):773.
https://doi.org/10.1016/s1473-3099(20)30195-x
Beland LP, Brodeur A, Wright T. COVID-19, stay-at-home orders and employment:
evidence from CPS data. 2020. Institute of Labor Economics. https://tinyurl.
com/y6oddclb (accessed 22 January 2021)
Blackburn C, Read J, Hughes N. Carers and the digital divide: factors affecting
Internet use among carers in the UK. Health Soc Care Community. 2005;
13(3):201–210. https://doi.org/10.1111/j.1365-2524.2005.00547.x
Brouwer N, Heck A, Smit G. Proctoring to improve teaching practice. MSOR
Connections. 2017; 15(2). https://doi.org/10.21100/msor.v15i2.414
Brown EE, Kumar S, Rajji TK, Pollock BG, Mulsant BH. Anticipating and
mitigating the impact of COVID-19 pandemic on Alzheimer’s disease and
related dementias. Am J Geriatr Psychiatry. 2020; 28(7):712–721. https://doi.
org/10.1016/j.jagp.2020.04.010
Car J, Sheikh A. Telephone consultations. BMJ. 2003; 326(7396):966–969. https://
doi.org/10.1136/bmj.326.7396.966
Comas-Herrera A, Zalakaín J, Litwin C, Hsu AT, Lane N, Fernández JL. Mortality
associated with COVID-19 outbreaks in care homes: early international
evidence. 2020. International Long-Term Care Policy Network, LTC Responses
to COVID. https://tinyurl.com/rgq7747 (accessed 22 January 2021)
Connor MJ, Winkler M, Miah S. COVID‐19 pandemic–is virtual urology clinic the
answer to keeping the cancer pathway moving?. BJU Int. 2020; 125(6):E3–E4.
https://doi.org/10.1111/bju.15061
Corden E, Rogers AK, Woo WA, Simmonds R, Mitchell CD. A targeted response
to the COVID‐19 pandemic: analysing effectiveness of remote consultations
for triage and management of routine dermatology referrals. Clin Experiment
Dermatol. 2020; 45(8):1047–1050. https://doi.org/10.1111/ced.14289
Currie K, Curran E, Strachan E, Bunyan D, Price L. Temporary suspension of
visiting during norovirus outbreaks in NHS Boards and the independent care
home sector in Scotland: a cross-sectional survey of practice. J Hosp Infect. 2016;
92(3):253–258. https://doi.org/10.1016/j.jhin.2015.10.018
Donaghy E, Atherton H, Hammersley V et al. Acceptability, benefits, and challenges
of video consulting: a qualitative study in primary care. Br J Gen Pract. 2019;
69(686):e586–e594. https://doi.org/10.3399/bjgp19X704141
Duffy S, Lee TH. In-person health care as option B. N Engl J Med. 2018; 378:104–
106. https://doi.org/10.1056/NEJMp1710735
Eurosurveillance Editorial Team. Updated rapid risk assessment from ECDC on the
novel coronavirus disease 2019 (COVID-19) pandemic: increased transmission in
the EU/EEA and the UK. Euro Surveill. 2020; 25(10):2003121. https://dx.doi.
org/10.2807%2F1560-7917.ES.2020.25.12.2003261
Fauci AS, Lane HC, Redfield RR. COVID-19—navigating the uncharted. N Engl J
Med. 2020; 382(13):1268–1269. https://doi.org/10.1056/nejme2002387
Field RE, Afzal I, Dixon J, Patel VR, Sarkar P, Marsh JE. Cohort profile: preliminary
experience of 500 COVID-19 positive cases at a South West London District
General Hospital. medRxiv. 2020. https://doi.org/10.1101/2020.04.28.20075119
Holmes Finch W, Hernández Finch ME. Poverty and COVID-19: rates of incidence
and deaths in the United States during the first 10 weeks of the pandemic. Front
Sociol. 2020; 5:47. https://doi.org/10.3389/fsoc.2020.00047
Gann B. Combating digital health inequality in the time of coronavirus. J Consumer
Health Internet. 2020; 24(3):278–284. https://doi.org/10.1080/15398285.202
0.1791670
Gardner W, States D, Bagley N. The coronavirus and the risks to the elderly in long-
term care. J Aging Soc Policy. 2020; 32(4-5):310–315. https://doi.org/10.1080
/08959420.2020.1750543
Greenhalgh T, Koh GCH, Car J. COVID-19: a remote assessment in primary care.
BMJ. 2020; 368:m1182. https://doi.org/10.1136/bmj.m1182
Grubaugh ND, Hanage WP, Rasmussen AL. Making sense of mutation: what D614G
means for the COVID-19 pandemic remains unclear. Cell. 2020; 182(4):794–
795. https://doi.org/10.1016/j.cell.2020.06.040
Hammersley V, Donaghy E, Parker R et al. Comparing the content and quality
of video, telephone, and face-to-face consultations. Br J Gen Pract. 2019;
69(686):e595–e604. https://doi.org/10.3399/bjgp19X704573
Hollis C, Morriss R, Martin J et al. Technological innovations in mental healthcare:
harnessing the digital revolution. Br J Psychiatry. 2015; 206(4):263–265. https://
doi.org/10.1192/bjp.bp.113.142612
Humphreys J, Schoenherr L, Elia G et al. Rapid implementation of inpatient
telepalliative medicine consultations during COVID-19 pandemic. J
Pain Symptom Manage. 2020; 60(1):e54–e59. https://doi.org/10.1016/j.
jpainsymman.2020.04.001
Killen A, Olsen K, McKeith IG et al. The challenges of COVID‐19 for people with
dementia with Lewy bodies and family caregivers. Int J Geriatr Psychiatry. 2020.
https://doi.org/10.1002/gps.5393
Livingston G, Barber J, Marston L et al. Prevalence of and associations with agitation
in residents with dementia living in care homes: MARQUE cross-sectional
study. B J Psych Open. 2017; 3(4):171–178. https://doi.org/10.1192/bjpo.
bp.117.005181
Mannion R, Goddard M. Impact of published clinical outcomes data: case
study in NHS hospital trusts. BMJ. 2001; 323(7307):260–263. https://dx.doi.
org/10.1136%2Fbmj.323.7307.260
McMichael TM, Currie DW, Clark S et al. Epidemiology of COVID-19 in
a long-term care facility in King County, Washington. N Engl J Med. 2020;
382(21):2005–2011. https://doi.org/10.1056/NEJMoa2005412
Milusheva S. Managing the spread of disease with mobile phone data. J Develop
Econ. 2020; 147:102559. https://doi.org/10.1016/j.jdeveco.2020.102559
NHS England, NHS Improvement. Freedom of information: Attend Anywhere.
2020. https://tinyurl.com/yylhts22 (accessed 22 January 2021)
Nursing and Midwifery Council. The code: professional standards of practice and
behaviour for nurses, midwives and nursing associates. 2018. https://www.nmc.
org.uk/standards/code/ (accessed 24 February 2021)
Pierce BS, Perrin PB, Tyler CM, McKee GB, Watson JD. The COVID-19
telepsychology revolution: a national study of pandemic-based changes in
US mental health care delivery. Am Psychol. 2020; 76(1):14–25. https://doi.
org/10.1037/amp0000722
Reeves JJ, Hollandsworth HM, Torriani FJ et al. Rapid response to COVID-
19: health informatics support for outbreak management in an academic
health system. J Am Med Informat Assoc. 2020; 27(6):853–859. https://doi.
org/10.1093/jamia/ocaa037
Rekatsina M, Paladini A, Moka E et al. Healthcare at the time of COVID-19:
a review of the current situation with emphasis on anesthesia providers. Best
Pract Res Clin Anaesthesiol. 2020; 34(3):539–551. https://doi.org/10.1016/j.
bpa.2020.07.002
Robinson L, Schulz J, Khilnani A et al. Digital inequalities in time of pandemic:
COVID-19 exposure risk profiles and new forms of vulnerability. First Monday.
2020; 25(7). https://doi.org/10.5210/fm.v25i7.10845
Sellars H, Ramsay G, Sunny A, Gunner CK, Oliphant R, Watson AJ. Video
consultation for new colorectal patients. Colorectal Dis. 2020; 22(9):1015–1021.
https://doi.org/10.1111/codi.15239
Shehata M, Zhao S, Gill P. Epidemics and primary care in the UK. Fam Med
Community Health. 2020; 8(2):e000343. https://doi.org/10.1136/
fmch-2020-000343
Sheninger E. Digital leadership: changing paradigms for changing times. Thousand
Oaks (CA): Corwin Press; 2019
Smith AC, Thomas E, Snoswell CL et al. Telehealth for global emergencies:
Implications for coronavirus disease 2019 (COVID-19). J Telemed Telecare. 2020;
26(5):309–313. https://doi.org/10.1177%2F1357633X20916567
Sorinmade OA. Highlighting some of the challenges COVID-19 has posed to the
European Convention on Human Rights. B J Psych Bull. 2020; 44(4):177–178.
https://doi.org/10.1192/bjb.2020.64
Suzuki M, Hotta M, Nagase A et al. The behavioral pattern of patients with
frontotemporal dementia during the COVID-19 pandemic. Int Psychogeriatr.
2020; 10:1–4. https://dx.doi.org/10.1017%2FS104161022000109X
Taylor SP, Nicolle C, Maguire M. Cross-cultural communication barriers in
health care. Nurs Stand. 2013; 27(31):35–43. https://doi.org/10.7748/
ns2013.04.27.31.35.e7040
Unadkat SN, Andrews PJ, Bertossi D et al. Response to Whitehead et al. re:
‘Recovery of elective facial plastic surgery in the post-coronavirus disease 2019
era: recommendations from the European Academy of Facial Plastic Surgery Task
Force’. Facial Plast Surg Aesthetic Med. 2020; Jun 10. https://doi.org/10.1089/
fpsam.2020.0289
Vannabouathong C, Devji T, Ekhtiari S et al. Novel coronavirus COVID-19: current
evidence and evolving strategies. J Bone Joint Surg Am. 2020; 102(9):734–744.
https://doi.org/10.2106/jbjs.20.00396
Visca D, Tiberi S, Pontali E, Spanevello A, Migliori GB. Tuberculosis in the time of
COVID-19: quality of life and digital innovation. Eur Respir J. 2020; 56(2):2001998. https://
dx.doi.org/10.1183%2F13993003.01998-2020
Zaman T, Viragos A, Zygiaris S. One frame at a time: the dynamics of a change
project, end-users and digitally transformed NHS. Acad Manage Proc. 2020;
2020(1):19673. https://doi.org/10.5465/AMBPP.2020.19673abstract
Zhou X, Snoswell CL, Harding LE et al. The role of telehealth in reducing the
mental health burden from COVID-19. Telemed J E Health. 2020; 26(4):377–
379. https://doi.org/10.1089/tmj.2020.0068
Ziebland SUE, Wyke S. Health and illness in a connected world: how might sharing
experiences on the internet affect people’s health?. Milbank Q. 2012; 90(2):219–
249. https://doi.org/10.1111/j.1468-0009.2012.00662.x
Available online at www.ijpe-online.com
vol. 17, no. 3, March 2021, pp. 276-288
DOI: 10.23940/ijpe.21.03.p3.276288
* Corresponding author.
E-mail address: 297866942@qq.com; hs0317@163.com.
Code Confusion in White Box Crowdsourced Software Testing
Run Luo*, Song Huang*, Hao Chen, and MingYu Chen
Command and Control Engineering College, Army Engineering University of PLA, Nanjing, 210007, China
Abstract
In recent years, crowdsourcing software testing has received wide attention as a new testing service mode. However, white box
crowdsourcing software testing is often regarded as an insecure testing service mode. The main threat comes from unknown attackers in the
crowdsourcing environment, who create the risk of source code leakage in white box testing. This paper discusses the weakness of white
box software testing in crowdsourcing software testing, as well as the possible mode of attack. This paper proposes to use code obfuscation
technology as a solution to this kind of attack and analyzes the impact of code obfuscation technology on crowdsourcing testing. This paper
is the first attempt at using code obfuscation technology in white box crowdsourcing software test task protection.
Keywords: crowdsourcing software testing; code confusion; software security; safety evaluation
© 2021 Totem Publisher, Inc. All rights reserved.
1. Introduction
In recent years, crowdsourcing software testing has gained more and more attention in the field of black box software testing,
such as web testing [1], APP testing [2-3], and QoE testing [4]. Because software testing needs a great deal of manpower, time, and
money, the crowdsourcing mode can provide abundant, inexpensive labor, and the attention of many workers is conducive to quickly
finding all kinds of software problems. Extensive practice has shown that crowdsourcing software testing has a good effect in improving product
quality and finding software defects [5-7]. Compared with black box software testing, the research progress of white box
software testing in the crowdsourcing software testing field is slower. The main reason is that white box crowdsourcing software
testing requires more testing workers, and white box testing also lacks a credible testing environment.
White box crowdsourcing software testing, by calling crowdsourcing testing workers, uses crowdsourcing platform
testing tools to design, write and execute test cases. The employer’s distrust of the crowdsourcing software testing environment
has become a major factor restricting white box crowdsourcing software testing. This distrust stems from not knowing which
participants on a crowdsourcing software testing platform may be attackers: attackers can pose as crowdsourcing workers to steal
information about the project under test. Presently, research on crowdsourcing software testers mainly focuses
on workers' ability, that is, on pushing each task to the right people to complete it. Studying the credibility of test workers is a long-term
and arduous task, which requires long-term monitoring by crowdsourcing test platforms [8-9].
We believe that the crowdsourcing test platform and the employer should establish such an understanding: “don’t
completely trust the crowdsourcing test workers”, so this paper is more inclined to use relevant technical means to “encrypt”
the test items before the crowdsourcing test tasks are distributed, that is, to transform test items in a targeted way.
Code security is a big challenge in the field of white box crowdsourcing software testing. White box testing needs to
release software source code to testers to meet the basic conditions of white box testing. Under this condition, white box
crowdsourcing software testing usually decomposes the original project, builds multiple test task packages, and then
distributes these task packages to test workers. The main idea of this kind of method is to break the whole into parts, disclose
the parts and protect the whole. However, there are some defects in code protection by only decomposing the original project,
which will be discussed in Section 3.
In order to further enhance the security of white box crowdsourcing software testing, this paper proposes to apply the
idea and technology of code obfuscation to the field of white box crowdsourcing software testing. At the same time, in order
to explain the reliability of our method, we also design a brute force cracking algorithm for white box crowdsourcing software
test package and a restore algorithm based on clue search. By comparing the restoration effect of the project before and after
the obfuscation, we find that the obfuscated task package is much more difficult to restore than the original task. At
the same time, in order to verify the impact of code obfuscation on test results, this paper also compares the reusability of test
cases before and after obfuscation, and it finds that the reusability of test code will decrease with an increase in obfuscation
intensity.
2. Background
White box crowdsourcing software testing is a new crowdsourcing software testing service mode. This mode aims to use the
testing workers and testing resources provided by a crowdsourcing software testing platform to carry out software testing that
requires reading the source code, such as unit testing, code review, and performance testing [10]. At present, research on
crowdsourcing software testing applications is usually carried out in the field of black box testing, such as web crowdsourcing,
mobile crowdsourcing, GUI crowdsourcing [11], etc. At the same time, research on the crowdsourcing software testing
process itself mainly focuses on crowdsourcing task design, crowdsourcing worker selection, test report deduplication and so on.
The main goal of these studies is to improve testing efficiency [12-13]. Few people pay attention to the security risks in
crowdsourcing software testing.
Compared with white box crowdsourcing testing, black box crowdsourcing testing offers excellent security; moreover, the
training cost of black box crowdsourcing is lower than that of white box crowdsourcing. From the perspective
of security, code protection technology for black box programs is relatively mature. Through the study
of software encryption and decryption, we find that code protection technologies such as digital watermarking [14], instruction
set modification, and virtualization are often used to create encrypted black box software that prevents crackers from
obtaining code information [15]. However, white box testing directly exposes the code, and these traditional black box and “shell”
technologies are difficult to apply to the current white box crowdsourcing testing scenarios. Therefore, to ensure the security
of white box test code in crowdsourcing mode, we need to study specific protection technology.
White box crowdsourcing software testing highlights the security risks in crowdsourcing software testing. The employer
needs to release the relevant code required for testing to the crowdsourcing software testing platform. Because the workers
participating in the test are unknown groups scattered across the Internet, the crowdsourcing platform cannot guarantee the
reliability of personnel. In the black box crowdsourcing software testing scenario, the employer allows the testing workers to
test the whole software to be tested. In the white box crowdsourcing software testing scenario, in order to prevent the testing
workers from divulging all the employer’s code, the employer decomposes the testing task into several smaller subtasks by
decomposing the original project. Each worker is only allowed to test part of the code that has been allocated. For example,
as shown in Figure 1, a complex program can be decomposed into multiple test tasks. This kind of method is called code
segmentation under white box crowdsourcing testing. The idea of code segmentation is mainly used to hide the relationship
between the local code and the whole code in the program, and to create information fragments to protect the security of the
whole program. At the same time, the split program can be more suitable for the small task mode of crowdsourcing platform.
Figure 1. Code segmentation
For the convenience of analysis, the example language selected in this paper is the golang programming language (later
called Go language) designed by Google, which is known as the 21st century C language. Golang is a statically, strongly typed,
compiled, concurrent programming language with garbage collection [16]. There is no concept of class or inheritance
in golang’s design. It mainly uses the concept of interface to realize polymorphism. At the syntax level, the reuse of golang is
based on the package, that is, the main package where the main() function is located is used as the entry of the program.
Whenever a function calls a function under this package, it is called directly by the function name. When it needs to call the
function of another package, it needs to import the package and use “imported package name + function name” to call.
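As a minimal illustration of this calling convention (the module path example.com/demo/graph and all identifiers here are hypothetical, not from the paper):

    // graph/graph.go
    package graph

    type Graph struct{ Vexs int }

    // New is called simply as New() inside package graph,
    // but as graph.New() from any other package.
    func New() *Graph { return &Graph{} }

    // main.go
    package main

    import (
        "fmt"

        "example.com/demo/graph" // the package must be imported before use
    )

    func main() {
        g := graph.New() // cross-package call: imported package name + function name
        fmt.Println(g.Vexs)
    }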
The minimum test unit of a Go program is the function. Go functions can be divided into two categories: methods, which
have receivers, and plain functions, which do not. For a function f to be tested, let its package be P and its dependent package
set be DP (dependent packages). The function can be tested and run under the following conditions: f itself is complete, every
declaration that f calls in its package P is present, and every declaration that f calls in its dependency set DP is present.
Therefore, the most important components of a Go program are declarations and packages. For statically typed languages, code
segmentation can prune the package P and its dependent package set DP where the function under test f is located through
static analysis technology. It can delete the related declarations that f does not use to get a new package Pd and a new dependent
package set DPd. Then, the code to be tested and its dependencies are encapsulated as a crowdsourcing test task and put into
the crowdsourcing software test environment as a test task.
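As a toy sketch of this pruning, with hypothetical identifiers (real tooling would compute the set of declarations reachable from f by static analysis):

    // Original package P: contains more than the function under test needs.
    package p

    import "math"

    // Dist is the function under test f.
    func Dist(x, y float64) float64 { return math.Sqrt(x*x + y*y) }

    // Manhattan is not reachable from Dist, so pruning removes it.
    func Manhattan(x, y float64) float64 { return abs(x) + abs(y) }

    // abs is used only by Manhattan, so pruning removes it as well.
    func abs(x float64) float64 { if x < 0 { return -x }; return x }

    // Pruned package Pd, as packaged into the test task: only the
    // declarations reachable from Dist survive, plus the dependency math.
    package p

    import "math"

    func Dist(x, y float64) float64 { return math.Sqrt(x*x + y*y) }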
3. Weaknesses of Code Segmentation
In this section, we will analyze the defect of the code segmentation idea and further illustrate how attackers can use this defect
to restore the decomposed program to the original program. White box crowdsourcing software test code segmentation mainly
refers to: the employer can extract the software fragments that can be tested through static analysis [17-19] and other related
technologies. These fragments are mainly composed of one or more functions to be tested and contain the dependent modules
required by these functions to ensure that they can be tested in a given test environment [20]. However, there are some defects
in decomposing source code to design task packages:
1) The number of task packages that can be decomposed is limited by the size of the software to be tested. Fewer test
units means fewer subpackages, which also means a reduction in the difficulty of restoration.
2) If the code is only segmented, without other processing, the original project can be restored as long as all the task
packages can be collected in the crowdtesting environment.
3.1. Crowdsourcing Task Restoration based on Clues
Suppose a Go program contains n test units; then, the program can build up to n test tasks. Through code segmentation
technology, these n task packages only contain necessary program information. However, without modifying the program
code, these n tasks contain all the declarations of the original program, which means that the original program can be restored
as long as the declarations are arranged in a certain order. An ideal condition for restoring the original program is that the
original program is divided directly without modifying the original code.
The task packages produced by code segmentation often contain many clues, because the crowdsourcing employer does not modify
the original code. For example, different task packages may contain the same function, use the same
declaration, or share a similar code style. Suppose task package A contains the structure Graph and defines the functions
AddVex and AddEdge, and task package B also contains the structure Graph and defines the functions DeleteVex and DeleteEdge.
Obviously, these two task packages split the same Go code package. Based on such clues, attackers can quickly merge
task packages to restore the original program (Figure 2). This kind of algorithm analyzes the relationship between task
packages by comparing the declarations in the n task packages.
Figure 2. Code restore
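A minimal sketch of this kind of clue analysis, assuming a simplified view of a task package as the set of top-level identifiers it mentions (all names hypothetical):

    package main

    import "fmt"

    // TaskPackage is a simplified view of a crowdsourced test task: just the
    // set of top-level identifiers that appear in its code.
    type TaskPackage struct {
        Name  string
        Decls []string
    }

    // GroupByClue maps each identifier to the task packages that mention it;
    // an identifier shared by two or more packages is a clue that those
    // packages were split from the same original package.
    func GroupByClue(tasks []TaskPackage) map[string][]string {
        index := make(map[string][]string)
        for _, t := range tasks {
            for _, d := range t.Decls {
                index[d] = append(index[d], t.Name)
            }
        }
        clues := make(map[string][]string)
        for decl, pkgs := range index {
            if len(pkgs) > 1 {
                clues[decl] = pkgs
            }
        }
        return clues
    }

    func main() {
        tasks := []TaskPackage{
            {"A", []string{"Graph", "AddVex", "AddEdge"}},
            {"B", []string{"Graph", "DeleteVex", "DeleteEdge"}},
        }
        fmt.Println(GroupByClue(tasks)) // map[Graph:[A B]]
    }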
Code Confusion in White Box Crowdsourced Software Testing 279
It can be seen that, although the code segmentation idea decomposes the program into smaller sub-fragments, there are
many clues between these sub-fragments, which can effectively guide the attacker in restoring the original program. Attackers
can use their own unlimited personal time to piece together the code. To improve code segmentation, we first need to
analyze the difficulty and cost of restoring the code. Clue-based restoration can piece smaller program fragments into
larger ones. But when there is no clue, or the clues are not obvious, it is often difficult to restore the program. In particular,
because clues are static features such as names and variables, they must be manually reconfirmed
by the attacker, so a large part of the overhead is spent on manual confirmation.
3.2. Crowdsourcing Task Restoration based on Directed Acyclic Graph
Crowdsourcing task restoration based on directed acyclic graphs is essentially a brute-force restoration method, which can ignore
the clues between program fragments. Suppose a Go program with n declarations is put into the crowdsourcing software
testing environment after code segmentation, and suppose that an attacker obtains all the task packages. Then, the attacker has the
ability to restore the original Go language program. First of all, a Go program can be regarded as a directed acyclic graph
composed of multiple packages; an arrow from A to B indicates that package B depends on package A. As long as each
declaration is put into the corresponding package, the original program can be restored. The attacker can regard
the program as a labeled directed acyclic graph with m distinct points and k edges, and fill the n declarations into the m points.
Among the many permutations, there must be valid permutations such that the new program is equivalent to, or completely consistent with,
the original program.
Suppose the attacker obtains three non-duplicate declarations A, B, and C, and now arranges these declarations into packages.
There are 5 ways to arrange the three declarations into packages (Figure 3): all three in one package (1 way), three across two
packages (3 ways), and each in its own package (1 way). A Go program with n declarations contains at least one package and at
most n packages. When the attacker does not know the distribution of declarations, he first needs to determine the number of
packages m. After the number of packages m is determined, declaration restoration reduces to putting n declarations into m packages.
At this stage, the dependencies between packages have not yet been considered, so the packages can be regarded as identical
containers, while the declarations are all distinct. The problem can be seen as filling n different balls into m identical baskets. The
number of such arrangements is the Stirling number (of the second kind) S:

S(n, m) = S(n−1, m−1) + m · S(n−1, m)   (n > 1, 1 ≤ m ≤ n)
Figure 3. Declarations in different packages

The Stirling number counts the ways of placing n declarations into m packages. The specific permutation process is calculated
by an O(n!) permutation function. Putting n declarations into m packages can be regarded as decomposing the positive integer n
into m positive integers. For example, if 4 declarations are put into 2 packages, there are two shapes: each of the two packages
holds 2 declarations, or one package holds 3 declarations and the other holds 1 declaration. The package arrangement algorithm
is shown in Table 1.

Table 1. Package generation algorithm
Algorithm: package generation algorithm
Function: DeclInPkgs
Input: declaration list of n declarations
Output: package list of m packages
1. func DeclInPkgs(str []string, m int) [][][]string {
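Only the signature of the published listing survives above. As a rough illustration (not the authors' algorithm), the following Go sketch computes the Stirling recurrence and enumerates the ways n declarations can be grouped into m non-empty packages; the names Stirling and Partitions are hypothetical.

    package main

    import "fmt"

    // Stirling returns S(n, m), the number of ways to place n distinct
    // declarations into m non-empty, indistinguishable packages, using
    // S(n, m) = S(n-1, m-1) + m*S(n-1, m).
    func Stirling(n, m int) int {
        switch {
        case m == 0 && n == 0:
            return 1
        case m == 0 || m > n:
            return 0
        case m == 1 || m == n:
            return 1
        }
        return Stirling(n-1, m-1) + m*Stirling(n-1, m)
    }

    // Partitions enumerates every way to split decls into exactly m
    // non-empty groups: each declaration either joins an existing group
    // or opens a new one, mirroring the recurrence above.
    func Partitions(decls []string, m int) [][][]string {
        var result [][][]string
        var rec func(i int, groups [][]string)
        rec = func(i int, groups [][]string) {
            if i == len(decls) {
                if len(groups) == m {
                    out := make([][]string, len(groups))
                    for j, g := range groups {
                        out[j] = append([]string(nil), g...) // deep-copy each group
                    }
                    result = append(result, out)
                }
                return
            }
            for j := range groups { // join an existing package
                groups[j] = append(groups[j], decls[i])
                rec(i+1, groups)
                groups[j] = groups[j][:len(groups[j])-1]
            }
            if len(groups) < m { // open a new package
                rec(i+1, append(groups, []string{decls[i]}))
            }
        }
        rec(0, nil)
        return result
    }

    func main() {
        fmt.Println(Stirling(4, 2))                                   // 7
        fmt.Println(len(Partitions([]string{"A", "B", "C", "D"}, 2))) // 7
    }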
Figure 4 shows how the number of arrangements changes as the number of packages ranges over m ∈ [1, n]. Constructing
declarations into packages by permutation and combination in this way supports brute-force arrangement regardless of the clues
in the program. The main cost of the algorithm is the full permutation of the declaration ordering, so the problem is solvable,
but only in super-polynomial time.
Figure 4. Relationship between declaration, package and Stirling number
After all the packages are determined, we only need to further establish the dependency graph relationship between
packages to restore the program. The dependency graph of a Go language program is essentially a directed acyclic graph. The
calculation formula for the number of labeled directed acyclic graphs with n nodes is as follows:

a(n) = Σ_{m=1}^{n} (−1)^{m+1} · C(n, m) · 2^{m(n−m)} · a(n−m),   with a(0) = 1
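As a sanity check on this recurrence, the following sketch computes a(n); the counts grow very quickly (a(1)=1, a(2)=3, a(3)=25, a(4)=543), so math/big is used. The name CountDAGs is hypothetical.

    package main

    import (
        "fmt"
        "math/big"
    )

    // CountDAGs returns a(n), the number of labeled directed acyclic graphs
    // on n vertices, from a(n) = Σ_{m=1..n} (−1)^(m+1)·C(n,m)·2^(m(n−m))·a(n−m)
    // with a(0) = 1.
    func CountDAGs(n int) *big.Int {
        a := make([]*big.Int, n+1)
        a[0] = big.NewInt(1)
        for k := 1; k <= n; k++ {
            sum := new(big.Int)
            for m := 1; m <= k; m++ {
                term := new(big.Int).Binomial(int64(k), int64(m))              // C(k, m)
                term.Mul(term, new(big.Int).Lsh(big.NewInt(1), uint(m*(k-m)))) // × 2^(m(k−m))
                term.Mul(term, a[k-m])                                         // × a(k−m)
                if m%2 == 1 {
                    sum.Add(sum, term)
                } else {
                    sum.Sub(sum, term)
                }
            }
            a[k] = sum
        }
        return a[n]
    }

    func main() {
        for n := 1; n <= 5; n++ {
            fmt.Println(n, CountDAGs(n)) // 1, 3, 25, 543, 29281
        }
    }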
Given a directed acyclic graph with n vertices, the number of edges in the graph lies in e = [0, n(n−1)/2]. For example,
the directed acyclic graphs with two points are shown in Figure 5. A directed acyclic graph with n vertices can be obtained
from a directed acyclic graph with n−1 vertices and m edges by adding between 0 and n−1 new edges. Figure 6 shows the result
of adding a point to a directed acyclic graph with A pointing to B.
Figure 5. Directed acyclic graphs with two points
When a directed acyclic graph with n−1 points is extended to n points, the relationship between the nth point and each of the
existing n−1 points has three cases: the new point points to the existing point, the existing point points to the new point, or there
is no edge between the two points. There are therefore 3^(n−1) such relationship combinations, which can be shown in the
ternary tree of Figure 7: it illustrates the possible new edges when a point C is added to a directed acyclic graph with A pointing to B.
The left child of the tree indicates that the new point points to the existing point, the middle child indicates no edge,
and the right child indicates that the existing point points to the new point. A path from the root node to the lowest leaf node
is the relationship of the edges in a new graph. Obviously, the automatic generation of directed acyclic graphs with n points
is a recursive process: every time an existing graph generates a batch of new graphs, it performs 3^(n−1) calculations.
Therefore, completing all the directed acyclic graph calculations for [1, n] requires on the order of 3^(C(n,2)) calculations;
that is, the time complexity of the algorithm is O(a(n)).
Figure 6. Three points constitute a directed acyclic graph
Table 2. Directed acyclic graph creation algorithm
Algorithm: Directed Acyclic Graph Create
Function: DAGCreate
Input: directed acyclic graph OldDAG with n−1 vertices
Output: the set of directed acyclic graphs with n vertices

func DAGCreate(OldDAG *OLGraph) []*OLGraph {
    var result []*OLGraph
    CreateTrie(n - 1)                // create the ternary tree of possible new-edge combinations
    for _, edges := range NewEdges { // each leaf path of the tree is one combination
        g := CopyOLGraph(OldDAG)        // copy the existing graph
        NewDAG := AddNewEdges(g, edges) // add all new edges of this combination
        result = append(result, NewDAG)
    }
    return result
}
Figure 7. Ternary tree representing the relationship between two points
According to the above analysis, it can be seen that reducing declarations to packages and generating directed
acyclic graphs from the relations between packages are solvable in super-polynomial time without using a clue-analysis program.
As the number of declarations increases, the cost of brute-force restoration increases rapidly. The number of candidate programs
b(n) consists of two parts: the number of package arrangements that the declarations can form, and the total
number of labeled directed acyclic graphs [21] that those packages can form:

b(n) = Σ_{m=1}^{n} S(n, m) · a(m)   (n ≥ 1)
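Combining the two sketches above (this fragment reuses the hypothetical Stirling and CountDAGs functions), b(n) can be computed directly:

    // TotalPrograms returns b(n) = Σ_{m=1..n} S(n, m)·a(m): the number of
    // candidate programs a brute-force attacker must consider when
    // n declarations have been recovered from the task packages.
    func TotalPrograms(n int) *big.Int {
        b := new(big.Int)
        for m := 1; m <= n; m++ {
            term := big.NewInt(int64(Stirling(n, m))) // ways to form m packages
            term.Mul(term, CountDAGs(m))              // × DAGs on those m packages
            b.Add(b, term)
        }
        return b
    }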
The difficulty of restoration can be regarded as the reciprocal of the proportion, among all graphs, of graphs that make the
program equivalent to the original program. The essence of clue-based restoration is to find the relationships between
declarations by searching, so as to generate the corresponding subgraphs, which reduces the amount of calculation. Therefore,
the main shortcomings of the segmentation idea are:
1) The code can be restored after segmentation, and the difficulty of restoration is related to the number of fragments.
2) After segmentation, the clues between code fragments greatly reduce the difficulty of restoration, mainly by reducing
the uncertainty of the graph. The ability to find clues depends on the efficiency of the attacker's analysis algorithm.
4. Approach

In the previous section, we pointed out the shortcomings of the code decomposition idea: attackers can restore the original
program by clue analysis or brute force after collecting the task packages. In this section, in order to increase the difficulty of
cracking, and even affect the results of cracking, we propose to introduce code obfuscation on top of the existing subcontracting
to improve the security of white box crowdsourcing software testing tasks.
4.1. Code Obfuscation Theory
Definition of code obfuscation [15]: code obfuscation converts a program P into an equivalent program T(P) through an obfuscation transformation T. Let i be an element of the set I of all inputs of program P. Only if ∀i∈I: T(P)(i) = P(i) is the obfuscation transformation T correct for program P (Figure 8). Correct obfuscation also means that if P terminates wrongly or cannot terminate, then T(P) terminates wrongly or cannot terminate; otherwise, T(P) must terminate and produce the same output as P. In the white box crowdsourcing software testing environment, code obfuscation must guarantee this equivalence over the test set UT of P: ∀i∈UT: T(P)(i) = P(i).
Figure 8. Code Obfuscation
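The requirement over UT can be checked mechanically once the task's test inputs are collected. A minimal Go sketch (ours; P and TP are trivial stand-ins for the original and obfuscated entry points, not code from the paper):

package main

import "fmt"

// P is the original program and TP its obfuscated counterpart T(P);
// both are illustrative stand-ins for real program entry points.
func P(i int) int  { return i*i + 1 }
func TP(i int) int { return i*i + 1 } // must agree with P on every test input

// equivalentOnUT checks the requirement ∀ i ∈ UT : T(P)(i) = P(i).
func equivalentOnUT(ut []int) bool {
	for _, i := range ut {
		if P(i) != TP(i) {
			return false
		}
	}
	return true
}

func main() {
	ut := []int{-2, -1, 0, 1, 2, 3}
	fmt.Println(equivalentOnUT(ut)) // prints true for a correct obfuscation
}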
Common code obfuscation methods are: (1) layout obfuscation, which changes the readability of the source code by modifying function and variable names; (2) control-flow obfuscation, which mainly modifies loop and branch statements and introduces opaque predicates to change the control flow to a certain extent; (3) data obfuscation, which obfuscates the numerical values or data structures used during program operation; and (4) preventive obfuscation, which studies known anti-obfuscation (deobfuscation) tools and provides improvements to existing obfuscations. A common preventive obfuscation is to create a special instruction set in a virtual machine.
By modifying the source code, code obfuscation can effectively resist the cracking of source code by reverse-engineering tools. Generally speaking, code obfuscation acts on the completed program and is mainly used to add a “shell” to a black-box program. We think its ideas can also be used in the special environment of white box crowdsourcing software testing.

Obfuscation target: no form of code protection can achieve comprehensive protection; the defender designs the corresponding protection scheme according to the known attack means. Therefore, before code obfuscation, we must first identify the object or target to be protected. The main purpose of white box crowdsourcing software testing is for testers to write test cases. We are willing for workers to complete the parts they are assigned, but we are opposed to attackers obtaining larger units. A larger unit means a more complete module of the original program, so the obfuscation goal is to hide the relationships between these modules, making the computational cost of restoration greater, or even making the original program impossible to restore, while the test cases obtained in the crowdsourcing environment remain effective for the un-obfuscated program.
Obfuscation evaluation indices: the traditional evaluation indices of code obfuscation were mainly proposed by Collberg and comprise Intensity, Flexibility, Concealment, and Cost [15,22]. Intensity mainly refers to the complexity of the program after obfuscation: the stronger the obfuscation, the more complex the program. Flexibility refers to the resistance of the obfuscated program to analyzers; the stronger the resistance, the better the obfuscation algorithm. The stronger the Concealment, the less likely an analyst is to realize that the program has been obfuscated. Cost refers to the additional overhead of the obfuscated program compared with the original program; this value will not be less than the cost of the un-obfuscated program itself, so the difference should be kept small to prevent the obfuscated program from incurring excessive running cost.
In the new scenario of white box crowdsourcing software testing, new requirements for code obfuscation are put forward.
1) Intensity: first of all, intensity needs special attention in this environment. If the program is made too complex through code obfuscation, it will affect the test workers' testing of the program, so the obfuscation intensity needs to be as low as possible.
2) Flexibility: as an evaluation index of a code obfuscation algorithm's resistance to analysis tools, flexibility is still an important index in the new environment.
3) Concealment: concealment, as a special index that reduces the possibility of the obfuscation being noticed, should also be retained.
4) Calculation Cost: since the obfuscated code inserted into a program fragment may not be added to the final program, the overhead caused by obfuscation in the test process can be ignored. Runtime overhead is not an indicator for crowdsourcing white box software testing.
5) Crowdsourcing Cost: although we do not pay attention to runtime overhead, it is replaced by crowdsourcing cost, the cost of crowdsourcing software testing. Here, it mainly refers to the cost of code obfuscation compared with the un-obfuscated program, chiefly in time, money, or manpower.
6) Test Effectiveness: the fundamental purpose of white box crowdsourcing software testing is to collect test cases. The test cases collected from the obfuscated program in the crowdsourcing environment must be reusable on the program before obfuscation. If code obfuscation results in too many modifications to the program, so that the test cases cannot be reused on the original program, then such obfuscation is invalid. Therefore, obfuscation algorithms such as control-flow obfuscation and data obfuscation, which insert directly into the source code and change its running results, often make the test results unconvincing.
4.2. Code Obfuscation Approach
Under white box crowdsourcing software testing, each task package contains complete, testable code fragments. To ensure that testing proceeds smoothly, the employer must preserve the readability of this code. Traditional code obfuscation techniques are typically used to resist decompilation and to increase the difficulty of reading code. In the white box crowdsourcing environment, we instead hope that code obfuscation makes the program as difficult as possible to restore as a whole, while damaging the readability of the code as little as possible. Layout obfuscation is used in a large number of obfuscation tools: although it modifies the original code information, it does not affect the operation of the program; it only makes the code difficult for attackers to read and understand. In traditional code obfuscation, layout obfuscation is considered unreliable, mainly because modifying the names of variables and functions cannot resist memory analysis, so its security is not high: the internal code execution of a released black-box program is fixed, and reverse analysis can basically ignore the code layout, which affects only the reader's experience. Is layout obfuscation still effective in the crowdsourcing mode?
White box crowdsourcing software testing divides the code and then puts it into the test environment. The task packages of these code fragments contain testable elements, but we hope that, even after attackers merge them, the resulting larger code fragments are difficult to run. The purpose of obfuscation is to hide the original relationships between the code fragments, or to make the restored code unable to run. A typical layout-obfuscation method is to modify the names of declarations, so that all kinds of declarations in the original code are changed. On the one hand, a program that could be restored through clues loses the clue of the name; on the other hand, anyone who wants to restore the program must restore these changed declarations again, because in static code, different names will point to different memory spaces when the program runs. Without obfuscation, the original code can be restored simply by placing each fragment in its specific location; after layout obfuscation, the attacker's clue-based restoration algorithm loses its effectiveness, and the attacker must continually restore the declared names so that the restored code can successfully form a whole.
For example, the orthogonal-list graph package contains two algorithms: DFS (depth-first search) and BFS (breadth-first search). Suppose the two functions are divided into different task packages without any modification. Both task packages then contain and use olgraph, and from this clue the attacker can infer that the two functions are defined in the same package. Now let us modify the names of the related declarations in one of the groups, for example changing the olgraph name in BFS to tree (Figure 9), which makes it look like a search over a tree structure. If the attacker does not spend time reading the code carefully but relies only on names for analysis, such clues become very difficult to find. More importantly, in the same running program, olgraph and tree will be allocated in two different regions of memory instead of representing the same variable.
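The following minimal Go sketch shows what this looks like in source (our own illustration; only the names olgraph, BFS, and tree come from the paper's example, and the function body is an assumption):

package olgraph

// OLGraph is the orthogonal-list graph type shared by the DFS and BFS task
// packages; both tasks importing and using this name is exactly the kind
// of clue an attacker searches for.
type OLGraph struct {
	adj [][]int // adj[u] lists the successors of vertex u
}

// BFS returns the vertices reachable from start in breadth-first order.
func BFS(g *OLGraph, start int) []int {
	visited := make([]bool, len(g.adj))
	order := []int{}
	queue := []int{start}
	for len(queue) > 0 {
		u := queue[0]
		queue = queue[1:]
		if visited[u] {
			continue
		}
		visited[u] = true
		order = append(order, u)
		queue = append(queue, g.adj[u]...)
	}
	return order
}

// Under layout obfuscation, the copy shipped in the BFS task package would
// instead read: package tree; type Tree struct{ ... }; func BFS(t *Tree, ...).
// The code still runs and stays readable for the tester, but the shared name
// linking it to the DFS task package is gone, and a naive merge of the two
// packages now treats Tree and OLGraph as two distinct types in memory.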
Expansion: the idea of expansion is based on the principle that the more points a graph has, the more possible configurations it has. By introducing more content into the task, the cost of restoration is greatly increased (Figure 10). For example, extra code prepared in advance can be added to the original project, or some task packages can be copied directly, have their names modified, and then be put into the test environment. Although this method increases the cost of code restoration, it also requires crowdsourcing testers to test more programs, which increases the cost of crowdsourcing software testing. Since the expansion itself does not modify the original code, the attacker can still restore the original project.
Figure 9. Layout confusion
Figure 10. Expansion
Misleading: misleading is an enhancement of the above two methods in crowdsourcing software testing. The idea of this kind of method is to guide the attacker to restore a different target program; its main goal is to change the result that can be restored into another, misleading result (Figure 11).

Misleading effectively compensates for the situation in which a program cannot run because names were modified: if the attacker cannot restore any executable program fragments, the attacker will suspect that these code fragments have been modified. By misleading, the defender therefore guides the attacker to restore a misleading program in place of the real protected program, so as to deceive the attacker.
Figure 11. Misleading leads to new results
Data obfuscation: data obfuscation is a controversial kind of code obfuscation in crowdsourcing software testing. It is mainly used to resist attackers who use code-similarity detection techniques, which can match statements with similar structures in code; simple name modification cannot resist this kind of attack against statement structure. The main reason for the controversy is that the data structures in the source code are modified, which may mean that some test cases obtained in the crowdsourcing mode cannot be reused in the original program.
5. Experiment

This chapter aims to verify whether the above obfuscation ideas can play a protective role in the white box crowdsourcing software testing environment. We divide and obfuscate a Go program with 100 declarations; this program is the complete version of the brute-force program above. The program contains 93 function declarations, 7 general declarations, 65 test cases, and 37 task packages that are divided according to their dependencies (Table 3). Its resistance to clue search and to brute-force cracking is used to verify the effectiveness of our obfuscation ideas. In order to analyze the relationships between the declarations in the stored program, the experiment transforms the relationships between the declarations into a directed acyclic graph for storage. Finally, to facilitate experimental observation, we use graphviz to convert these directed acyclic graphs into the dot language and generate pictures (Figure 12).

Table 3. Program Overview
Package Functions General declarations Test cases
api 9 0 6
bucket 47 1 38
decl 9 0 2
dirgraph 3 1 2
olGraph 18 3 14
tree 7 2 3
Figure 12. Declaration relation generated by dot language
In order to verify the code obfuscation methods of the previous chapter, we designed three groups of controlled experiments: the impact of layout obfuscation and misleading on the restoration results, the impact of proliferation on the efficiency of restoration, and the impact of data obfuscation and layout obfuscation on the test results.

The main goal of layout obfuscation and misleading is to affect the restoration results. The former prevents the restored result from unifying variables and functions in static files and from unifying them in runtime memory; the latter builds on layout obfuscation, putting prepared misleading code into the task packages to guide the attacker toward restoring a new project. We select 30 of the 37 task packages, modify 90 declarations in them, and then randomly rename the packages in which these declarations are located, while ensuring that the programs under the same task package can still run. For example, we change the tree package's function createarctree to the star package's function start; the number of declarations is still 100, but the number of packages is increased to 37. Before conversion, 56 clues can be found using clue-search analysis. After conversion, all these clues are hidden by renaming, which eventually makes the clue-search algorithm invalid.

Proliferation increases the cost of the related cracking algorithms by providing additional testing tasks. As shown in Figure 13, the total number of declarations is increased from 90 to 200 by adding artificially designed irrelevant declarations to the existing six Go packages. When we then run the clue-search algorithm, the search results include the 50 extra clues that we added in advance. At the same time, to verify the impact of declaration growth on the brute-force cracking algorithm, we ran it many times in a 4-core, 16 GB memory Windows 10 environment; the running times are shown in Table 4.
Table 4. Impact of the increasing number of declarations on the brute-force cracking algorithm
Declaration Number The number of directed acyclic graphs Run time (s)
1 1 0.06
2 4 0.079
3 35 0.224
4 715 3.499
5 35382 193.588
6 3781503 6451.76
Figure 13. Effect of proliferation on clues
Finally, in order to test the impact of code obfuscation on test results, we reuse the existing 65 test cases under layout obfuscation and data obfuscation. All 65 functions can be used for layout obfuscation (here, mainly name obfuscation), and their test cases can be reused after making the corresponding name changes; 57 functions can be used for data obfuscation. In the experiment, we use two data obfuscation methods. One is the incremental method, which mainly inserts redundant variables into the original data structure so that the data structure needs additional memory space. The other is the modification method, which mainly modifies existing variable types into similar ones, such as converting an integer to a floating-point type or replacing a variable with a pointer variable. When we migrate the existing test cases to these two situations, most of them are not usable: redundant variables often cause null-pointer initialization problems, while the type changes caused by modification require comprehensive rewriting of the test cases. The final results of the three obfuscations are shown in Figure 14. Generally speaking, data obfuscation is more intensive than layout obfuscation, and the greater the intensity of obfuscation, the greater the impact on test-case reuse before and after obfuscation.
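For illustration, here is a minimal Go sketch of the two methods (our own, using a hypothetical Point type rather than the experimental program's declarations):

package data

// Point is the original structure that the task code and its test cases use.
type Point struct {
	X, Y int
}

// Incremental method: redundant variables are inserted into the original
// data structure, so it needs additional memory space. A test case that
// constructed Point{1, 2} positionally no longer compiles, and the unused
// pointer invites null-pointer initialization problems if touched naively.
type PointIncr struct {
	X, Y int
	pad  [8]byte // redundant field inserted by the obfuscator
	aux  *int    // redundant pointer, never initialized
}

// Modification method: existing variable types are changed into similar
// ones, e.g. integer to floating-point, or a value to a pointer. Every test
// asserting exact integer equality must now be comprehensively rewritten.
type PointMod struct {
	X *float64
	Y *float64
}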
Figure 14. Influence of layout confusion and data confusion on test results

6. Conclusion
Presently, the main obstacle to the development of white box crowdsourcing software testing is the leakage of source code. In order to reduce the risk brought by leakage of the whole project, the employer often uses code segmentation to reduce, as far as possible, the amount of code each worker receives. Without special treatment, however, these test tasks can be directly restored to the original program. The difficulty of cracking comes from two sources: one is collecting the program fragments from the crowdsourcing environment; the other is the difficulty of restoring the program itself, which is mainly determined by its complexity. In this paper, code obfuscation is taken as a special protection scheme: by increasing the complexity of the program, the cost of restoration is increased, or the attacker is directly misled into restoring a wrong program. The influence of the different obfuscation methods on the protection effect is shown in Table 5.
Table 5. Influence of different confusion methods on testing

Confusion scheme | Impact on restoration results | Impact on testing
Layout confusion | Affects the restoration result | Test cases are easy to reuse and hardly increase the labor cost.
Expansion | Increases restoration overhead | Test workers need to design more test cases, increasing labor costs.
Misleading | Increases restoration overhead and affects the restoration result | Test workers need to design more test cases, increasing labor costs.
Data confusion | Affects the restoration result | Test cases are difficult to reuse.
We think that screening the integrity of testers and building a trusted environment for a crowd-testing platform is a long-term process. Code obfuscation protects the code itself and can provide a good protection scheme at the current stage of development of white box crowdsourcing software testing. However, code obfuscation does not guarantee that the program can never be restored, and the schemes provided above can only resist the corresponding attack modes. At the same time, code obfuscation often affects the results of crowdsourcing software testing, and the higher the intensity of obfuscation, the more profound the impact [23]. There has been no large-scale research on the relationship between crowdsourcing test cases and code obfuscation: even though the definition of code obfuscation requires the program before and after obfuscation to be functionally consistent, there is no theoretical research proving that the test cases of a program remain completely equivalent for the program after code obfuscation. The increase in crowdsourcing cost caused by obfuscation is also a problem [12], and it is difficult to measure where the balance point between efficiency and safety lies for crowdsourcing benefits [23].

In the next step, we will further study the possible security risks in crowdsourcing, attacks that exploit these risks, and the related solutions. We hope that white box crowdsourcing software testing will develop in a more efficient and secure direction and, by continuously integrating test resources on networks, create a new software engineering model for open collaboration on a larger scale [24].
Acknowledgments
This project is supported by the National Key R&D Program of China (No.2018YFB1403400), Natural Science Foundation
of China (No.61702544), Natural Science Foundation of Jiangsu Province, China (No.BK20160769, BK20141072), and
China Postdoctoral Science Foundation (No.2016M603031).
References
1. Wang, P., Varvello, M. and Kuzmanovic, A. Kaleidoscope: A crowdsourcing testing tool for web quality of experience. In 2019
IEEE 39th International Conference on Distributed Computing Systems (ICDCS), pp. 1971-1982, 2019.
2. Almeida, M., Bilal, M., Finamore, A., Leontiadis, I., Grunenberger, Y., Varvello, M. and Blackburn, J. Chimp: Crowdsourcing
human inputs for mobile phones. In Proceedings of the 2018 World Wide Web Conference, pp. 45-54, 2018.
3. Lin, J., Amini, S., Hong, J.I., Sadeh, N., Lindqvist, J. and Zhang, J. Expectation and purpose: understanding users’ mental
models of mobile app privacy through crowdsourcing. In Proceedings of the 2012 ACM conference on ubiquitous computing,
pp. 501-510, 2012.
4. Gardlo, B., Egger, S., Seufert, M. and Schatz, R. Crowdsourcing 2.0: Enhancing execution speed and reliability of web-based
QoE testing. In 2014 IEEE International Conference on Communications (ICC), pp. 1070-1075, 2014.
5. Leicht, N., Knop, N., Blohm, I., Müller-Bloch, C. and Leimeister, J.M. When Is Crowdsourcing Advantageous? The Case of
Crowdsourced Software Testing. In Proceedings of 2016 European Conference On Information Systems (ECIS), 2016.
6. Zhang, X., Feng, Y., Liu, D., Chen, Z. and Xu, B. Research progress of crowdsourced software testing. Journal of Software,
29(1), pp.69-88, 2018.
7. Gao, R., Wang, Y., Feng, Y., Chen, Z. and Wong, W.E. Successes, challenges, and rethinking–an industrial investigation on
crowdsourced mobile application testing. Empirical Software Engineering, 24(2), pp.537-561, 2019.
8. Gadiraju, U., Demartini, G., Kawase, R. and Dietze, S. Crowd anatomy beyond the good and bad: Behavioral traces for crowd
worker modeling and pre-selection. Computer Supported Cooperative Work (CSCW), 28(5), pp.815-841, 2019.
9. Van Pelt, C.R., Cox, R., Sorokin, A. and Juster, M., CrowdFlower Inc. Predicting future performance of multiple workers on
crowdsourcing tasks and selecting repeated crowdsourcing workers. U.S. Patent 8,626,545, 2014.
10. Khan, M.E. and Khan, F. A comparative study of white box, black box and grey box testing techniques. International Journal of Advanced Computer Science & Applications, 3(6), pp. 1-12, 2012.
11. Dolstra, E., Vliegendhart, R. and Pouwelse, J. Crowdsourcing gui tests. In 2013 IEEE Sixth International Conference on
Software Testing, Verification and Validation, pp. 332-341, 2013.
12. Alyahya, S. and Alrugebh, D. Process improvements for crowdsourced software testing. International Journal of Advanced
Computer Science and Applications, 2017.
13. Zogaj, S., Bretschneider, U. and Leimeister, J.M. Managing crowdsourced software testing: a case study based insight on the
challenges of a crowdsourcing intermediary. Journal of Business Economics, 84(3), pp.375-405, 2014.
14. Collberg, C.S. and Thomborson, C. Watermarking, tamper-proofing, and obfuscation-tools for software protection. IEEE
Transactions on software engineering, 28(8), pp.735-746, 2002.
15. Nagra, J. and Collberg, C. Surreptitious software: obfuscation, watermarking, and tamperproofing for software protection. Addison-Wesley Professional, 2010.
16. Donovan, A. A. A. and Kernighan, B. W. The go programming language. China Machine Press, 2016.
17. Ali, K. and Lhoták, O. Application-Only Call Graph Construction. In Proceedings of 26th European Conference on Object-
Oriented Programming, pp. 688-712, 2012.
18. Cai, H. and Santelices, R. Abstracting program dependencies using the method dependence graph. In 2015 IEEE International
Conference on Software Quality, Reliability and Security, pp. 49-58, 2015. 288 Run Luo, Song Huang, Hao Chen, and MingYu Chen
19. Lulu, W., Bixin, L. and Xianglong, K. Type slicing: An accurate object oriented slicing based on sub-statement level
dependence graph. Information and Software Technology, 127, p.106369, 2020.
20. Isazadeh, A., Izadkhah, H. and Elgedawy, I. Source code modularization: theory and techniques. Springer, 2017.
21. Robinson, R.W. Counting unlabeled acyclic digraphs. In Combinatorial mathematics V, pp. 28-43, 1977.
22. Collberg, C., Nagra, J. and Wang, F.Y. Surreptitious software: Models from biology and history. In International Conference on Mathematical Methods, Models, and Architectures for Computer Network Security, pp. 1-21, 2007.
23. Barak, B. Hopes, fears, and software obfuscation. Communications of the ACM, 59(3), pp.88-96, 2016.
24. Gao, J., Bai, X. and Tsai, W.T. Cloud testing-issues, challenges, needs and practice. Software Engineering: An International
Journal, 1(1), pp.9-23, 2011.
Journal of Business Ethics (2020) 167:433–450
REVIEW PAPER
https://doi.org/10.1007/s10551-019-04143-6

Biometric Technology and Ethics: Beyond Security Applications

Andrea North-Samardzic
Department of Management, Deakin Business School, Deakin University, 70 Elgar Road, Burwood, VIC 3125, Australia
andreans@deakin.edu.au

Received: 14 July 2018 / Accepted: 4 March 2019 / Published online: 8 March 2019
© Springer Nature B.V. 2019

Abstract
Biometric technology was once the purview of security, with face recognition and fingerprint scans used for identification
and law enforcement. This is no longer the case; biometrics is increasingly used for commercial and civil applications.
Due to the widespread diffusion of biometrics, it is important to address the ethical issues inherent to the development and
deployment of the technology. This article explores the burgeoning research on biometrics for non-security purposes and
the ethical implications for organizations. This will be achieved by reviewing the literature on biometrics and business ethics
and drawing from disciplines such as computer ethics to inform a more robust discussion of key themes. Although there are
many ethical concerns, privacy is the key issue, with associated themes. These include definitions of privacy, the privacy
paradox, informed consent, regulatory frameworks and guidelines, and discrimination. Despite the proliferation of biometric
technology, there is little empirical research on applied biometrics and business ethics. As such, there are several avenues
for research to improve understanding of the ethical implications of using this technology.

Keywords: Biometric technology · Ethics · Privacy

Introduction
Biometric technology is widely used by a variety of organizations. Fingerprint scans and face recognition technology
(FRT) are commonly used to assist with surveillance and
border security. Recently, biometric technology has been
used for commercial and civil applications, such as Face-
book and iPhone, for identity management. With this evo-
lution in application, questions arise about the ethical use
of such technology within the broader field of technology
ethics. It is its own field, distinct from other technological
innovations such as artificial intelligence, three-dimensional
printing, cloud technology, data analytics, nanotechnolo-
gies, and robotics (Schuelke-Leech 2018). Like these tech-
nologies, biometrics is disruptive, as it has the capacity to
“restructure, reorganize, disrupt current social and institu-
tional norms and standards, operations, production, trends,
not limited to a particular market or industry” (Schuelke-Leech 2018, p. 270).
Unlike other technological innovations, biometrics leads to additional ethical concerns. Collecting biometric data
have been described as “giving up a piece of yourself”
(Alterman 2003), akin to extracting a biological sample
(Milligan 1999), making it “intrusive” (Sprokkereef and de
Hert 2012) and “invasive” (Jain and Kumar 2012) for data
subjects. With the advent of second-generation behavioral
biometrics, issues extend to covert data capture and lack of
transparency and consent (Sprokkereef and de Hert 2012).
This impinges on people’s right to control their identity
(Alterman 2003; Milligan 1999). This requires an examina-
tion and exploration of the ethical implications of the use of
biometrics in and by organizations.
This article reviews the nascent literature on biometrics in applied organizational and business contexts, extend-
ing the themes and debates by drawing from the broader
and more longstanding fields of technology and computer
ethics. Although the literature on biometrics and business
ethics is not substantial, it raises new and troubling ques-
tions that require debate and consideration from scholars to
inform ethical business practices. While legislation covers
many aspects of the ethical issues raised in the literature,
regulatory frameworks alone are insufficient to ensure ethical probity in the use of biometric technology in organizations.

An earlier version of this paper was accepted for the 77th Academy of Management Meeting.
This article provides an overview of the nature of biometric technology and its applications. Attention is given
to its evolution, from first to second generation and affor-
dances. Next, the literature on biometric technology in
applied organizational contexts, specifically business ethics,
is reviewed. As most research does not consider ethical con-
cerns for organizations, the extant literature on technology
ethics informs a discussion of the themes that emerged from
the review of the research on biometrics and business ethics,
and ethics theories and frameworks. Building on this review,
this article identifies areas for theoretical development,
empirical advancement, and practical implications for the
ethical use of biometrics. Although there is limited research
on this topic, combined with broader research on biomet-
rics and applied ethics, there are significant issues worth the
attention of business ethics researchers and organizations.
Biometrics: An Overview of Application and Purpose
Biometric technology concerns the use of the physiological and behavioral characteristics of individuals. Biometric data
are usually used for identity management or authentication
(Jain et al. 2000). Biometric technology uses people’s fea-
tures and characteristics to capture data such as fingerprints,
palm prints and geometry, hand vein patterns, finger knuckle
prints, face, ear shape, tongue print, iris, retina, sclera, voice,
keystroke dynamics, gait, signature (Unar et al. 2014), pulse
and DNA (Sutrop and Laas-Mikko 2012). These can include
static and moving images (Zhao et al. 2003). Jain et al.
(2004, p. 2) identified the four most important qualities of
biometric data:
1. universality: each person should possess the characteristic
2. collectability: the characteristic can be quantitatively measured
3. distinctiveness: the characteristic should be different between people
4. permanence: the characteristic should be invariant over time.
The system must also be capable of accuracy and efficiency, acceptable to users, and non-susceptible to circum-
vention, such as hacking (Jain et al. 2004).
Biometrics can include medico-chemical technology and incorporate health and medical data into their functions, such as magnetic resonance imaging and electrocardiogram
machines (Unar et al. 2014). There is merit to recognizing
such technology, given that personal fitness devices, such as
Apple Watch and Fitbit, are considered biometric technology
(Karkazis and Fishman 2017). Medico-chemical devices
used in medicine are outside the scope of this review, as they
were designed for different purposes, have separate regula-
tory frameworks, and are not used for civil applications out-
side healthcare. Thus, the ethical implications are different.
There are several notable shifts in biometric technology, from first to second generation. The latter has a greater
focus on behaviors, as opposed to individual identifiers.
Schumacher (2012) characterizes this shift as moving from
“who you are” to “how you are.” There have also been shifts
in purpose and application, from security to safety (Norval
and Prasopoulou 2017), specifically, civilian and private sec-
tor applications (Prabhakar et al. 2003).
A 2003 literature review of the applications of FRT (Zhao et al. 2003) identified four main uses of the technology:
entertainment, smart cards, information security, and law
enforcement. These activities specifically include (but are
not limited to) border control, forensics, criminal identifica-
tion, access control, computer logins, e-commerce, welfare
disbursements, missing children identification, identification
cards, passports, user authentication on mobile devices, and
time and attendance monitoring systems (Bhattacharyya
et al. 2009; Unar et al. 2014). With the shift to second-gen-
eration biometrics, the technology is extending beyond iden-
tity management to group analysis, in which generalizations
about demographic categories can be made and behaviors
can be analyzed (Schumacher 2012). It has afforded the rise
of what McStay (2014, 2018) refers to as emotional surveil-
lance or “empathic media … technologies that track bodies
and react to emotions and intentions” (McStay 2016, p. 1).
These differences are summarized in Table 1.

Table 1. Comparison of first- and second-generation biometrics

 | First generation | Second generation
Purpose | Who are you? | How are you?
Application | Identity management and authentication | Safety and behavioral assessment
Context | Government and security | Civil and private sector
Level of analysis | Individual | Groups
Primary ethical concern | Privacy risks | Discrimination power
Example | Fingerprint or face recognition for law enforcement or consumer device identity management | Voice recognition to understand individual affect and face recognition to assess group demographic characteristics such as age, gender, and race
The diffusion of biometric technology has created new affordances outside traditional security and identity manage-
ment. Biometrics has been used to assess student engage-
ment. D’Mello et al. (D’Mello and Graesser 2010; D’Mello
et al. 2009; McDaniel et al. 2007), among others (Whitehill
et al. 2014), used FRT to evaluate the responses of students
to classroom learning. This illustrated that facial movements
predict outcomes of engagement, frustration, and learning
(Grafsgaard et al. 2013). There is considerable research on
audience evaluation in the form of laboratory studies that
sought to gauge audience responses to arts, media, and enter-
tainment (Hassib et al. 2017; Kirchberg and Tröndle 2012,
2015; Martella et al. 2015, 2017; Soleymani et al. 2014;
Wang and Cesar 2014, 2017; Wang et al. 2014, 2016; Webb
et al. 2016). Market research has used “methods such as eye
tracking, measurements of brain activity through electroen-
cephalography (EEG), and measurements of psychophysio-
logical changes via electro-dermal activity” (Gregersen et al.
2017, p. 3). This is also known as galvanic skin response.
This research shows that the use of biometrics has significantly broadened beyond its initial applications. With new affordances comes the potential for new or different ethical concerns (Schumacher 2012). The abovementioned
studies are lab based. However, the question arises as to
what happens when first- and second-generation technology
is applied to organizations without ethical research guide-
lines. Given the widespread use of biometrics, the role of
organizations as developers and users requires scrutiny. How
business ethics addresses these concerns is worth examining.
Technology and Business Ethics

Before addressing biometrics and ethics in applied business and organizational settings, it is important to first address the
existing literature on technology and business ethics so that
the relevant research can be positioned in relation to existing
debates and themes. A study of biometrics and ethics would
be situated in the broader field of applied technology and
ethics in organization technology (Buchholz and Rosenthal
2002; Loch et al. 1998; Martin and Freeman 2004). It would
sit alongside research themes such as worker surveillance
(Brown 1996; Loch et al. 1998; Martin and Freeman 2003;
West and Bowman 2016), big data ethics (Herschel and
Miori 2017; Nunan and Di Domenico 2017; Zwitter 2014),
and the ethics of algorithms (Martin 2018). One of the main
questions raised about technology, ethics, and organiza-
tions by business ethicists is “who should be accountable
for the ethical implications of technologies?" (Martin and Freeman 2004). There is consensus that the organizations
that deploy the technology should be accountable (Martin
and Freeman 2003; West and Bowman 2016). Martin (2018)
argues that developers of algorithms should be responsible
for constructing software with ethical principles in mind.
The nature of this accountability does not always align with
ethical concepts, such as privacy as dynamic in practice
(Brown 1996). In addition, the role of the software in the
decision being made (either small or large) and the implica-
tions of the decision on society (ranging from minimal to
pivotal, such as access to public goods) affect the nature of
the responsibility (Martin 2018).
The accountability of organizations has thus been established by scholars, which raises the question of whether
new ethical concerns arise when biometrics is used for non-
security applications. Johnson (2001 in Martin and Free-
man 2003) suggests that new technologies do not raise new
ethical issues, simply new behaviors. For example, worker
surveillance is common. Using biometric technology for this
purpose may not change the nature of existing ethical con-
cerns or create new ones. However, it is important to review
the literature to ascertain if the use of biometric technol-
ogy by businesses poses new or different ethical matters for
researchers and organizations. Biometric scholars, such as
Schumacher (2012), contend that it engenders new ethical
considerations.
It is also necessary to examine one of the fundamental assumptions underpinning debates
about technology and business ethics; that is, the relation-
ship society has with technology. The traditional view con-
siders the relationship either socially or technologically
determined, representing two ends of a spectrum (Martin
and Freeman 2004, p. 354). For Martin and Freeman (2004),
this binary approach is limited, as people’s relationship with
technology is neither fully technologically deterministic
(i.e., people are controlled by technological artifacts) nor
socially determined (i.e., technology is neutral and socially
controlled). This approach perpetuates Martin and Free-
man’s (2004) separation thesis of business and ethics, in
which business is detached from ethics. Instead, they advo-
cate a socio-technical systems approach, in which people’s
relationship with technology is a natural social interaction
and cannot be appropriately captured by binary opposites;
people both shape and are shaped by technology (Martin and
Freeman 2004). As such, it is simplistic to cast technology
as either value-laden or morally neutral. In practice, people
have constant and dynamic interactions with technology and,
as such, ethics and technology, like ethics and business, are
intertwined (Martin and Freeman 2003).
However, a socio-technical systems approach alone is insufficient for a robust understanding of the situated nature of technology within organizations and the attendant business ethics concerns.

Table 1. First- versus second-generation biometrics
Dimension | First generation | Second generation
Application | Identity management and authentication | Safety and behavioral assessment
Context | Government and security | Civil and private sector
Level of analysis | Individual | Groups
Primary ethical concern | Privacy risks | Discrimination power
Example | Fingerprint or face recognition for law enforcement or identity management | Technology that assesses group demographic characteristics such as age, gender, and race

Martin and Freeman (2004) take a prag-
matic perspective and draw from their earlier work (Martin
and Freeman 2003) that proposes a framework for ethical
analysis. This is particularly useful for examining the ethi-
cal implications of technology within organizations, which
appreciates the situated and relational nature of technology
and business ethics. This includes an analysis of the tradi-
tional moral concepts of self, relationships with others, com-
munity, and property (Martin and Freeman 2003). Regarding
the concepts of self, relationships with others, and commu-
nity, they are surrounded by moral rights and duties such
as freedom, privacy, respect, and responsibility. Similarly,
property has associated concepts of responsibility, use and
ownership, and voluntary agreement.
How biometrics engages with these themes and frameworks is of inter-
est to this article. Although this is not an in-depth review
of the scholarship on technology and business ethics, this
discussion provides an overview of themes and concerns to
facilitate a comparison with the review of the biometrics and
business ethics literature. The following section provides
a review of the research into, and ethical concerns about,
biometrics for non-security purposes.
Biometrics has been researched for decades (Royakkers et al. 2018), making for an expansive
body of literature. This creates a challenge for determin-
ing the research to be considered to develop a cohesive and
comprehensive—although not exhaustive—foundation for
biometrics and business ethics scholarship. Three databases
were searched (Business Source Complete, ProQuest Cen-
tral, and ScienceDirect) using the terms “biometric” and
“ethics.” Thousands of peer-reviewed journal articles were
returned. A cursory examination revealed that a substantial
number were irrelevant, as they were unrelated, consisted of
book reviews, or were from business publications. To refine
the fields, each database was considered separately.
The first database considered was Business Source Premier, with 15 results from scholarly journals. These were scanned
to determine if they discussed biometrics or ethics, or merely
used the terms as examples. This resulted in six articles that
addressed applied organizational, business or management
contexts, or non-security applications such as consumer
products, worker surveillance, or professional ethics. The
search of ProQuest Central resulted in 13,365 peer-reviewed
academic articles in scholarly journals. Given the signifi-
cant number of articles returned, the search was refined
to be limited to articles with biometrics and ethics in the
abstract; this elicited 26 articles. As ProQuest Central is an aggregator, duplicates were returned and removed from the list, leaving 15 articles, with a subsequent article removed due to relevancy. Two were already found in the search from Business Source Premier. A review
of these articles resulted in an additional three identified as
addressing biometrics and ethics in a business context.
These databases do not account for relevant technology and ethics journals such
as Surveillance and Society, Ethics and Information Tech-
nology, Science and Engineering Ethics, Journal of Infor-
mation, Communication and Ethics in Society, and Journal
of Business Ethics. An additional search of these journals
using the terms “biometric” and “ethics” returned 87 arti-
cles. To refine this search to those that were most relevant to
business ethics scholarship, they were scanned to discover
if they substantively addressed biometrics and ethics. If the
words “biometric” or “ethics” were used only once or twice
as illustrative examples of technology but were not actively
discussed, the article was removed from the list. Business
publications or book reviews were removed. This resulted in
a list of 63 articles. These articles were scanned to ascertain
if they discussed biometrics and ethics in an applied busi-
ness or organizational context. This resulted in an additional
six articles added to the list.
The ScienceDirect search returned articles that mentioned "biometric" and "ethics." Similar to the ProQuest
Central database, this was refined by searching for articles
with these terms in the title or abstract. Only two were
returned, both of which were in public health. As such, none
of the search returns from this database were included. The
final list of 15 articles that address biometrics and ethics in
an organizational or business context is included in Table 2.
These articles were reviewed to ascertain key elements such
as whether the article was empirical or conceptual, the topic
of the article, whether biometrics was the main technological
focus, if it encompassed first or second biometric technology
(see Table 1), the ethical theories included, the ethical issues
raised, and the organizational context in which biometrics
was applied.
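The screening described above amounts to a small filtering pipeline. As a minimal sketch, and not the tooling actually used in this review, the following Python fragment illustrates how abstract-level screening of database results might be automated; the record fields and inclusion test are assumptions for illustration only.

# Illustrative only: a simplified version of the abstract-screening
# step described above. Record fields and criteria are hypothetical.
from dataclasses import dataclass

@dataclass
class Record:
    title: str
    abstract: str
    peer_reviewed: bool
    doc_type: str  # e.g., "article", "book review"

def passes_screen(rec: Record) -> bool:
    """Keep peer-reviewed articles whose abstract mentions both
    search terms, as a rough proxy for substantive relevance."""
    text = rec.abstract.lower()
    return (
        rec.peer_reviewed
        and rec.doc_type == "article"
        and "biometric" in text
        and "ethic" in text  # matches "ethics" and "ethical"
    )

def screen(records: list[Record]) -> list[Record]:
    # Duplicates across databases would be removed before this step.
    return [r for r in records if passes_screen(r)]

In practice, as the review notes, such keyword filters only narrow the pool; manual scanning is still needed to judge whether an article substantively addresses biometrics and ethics rather than merely using the terms as examples.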
Table 2 summarizes the literature on applied biometrics and ethics in business.
Most research is conceptual rather than empirical, which means that empirical evaluation of the applications of the technology and their ethical implications is still needed. First-generation
biometrics for identity management is addressed, in addition
to second-generation behavioral biometrics. The context in
which the technology is applied is varied, with a number of
articles exploring the ethical issues associated with biomet-
rics related to customers, such as consumer products (Cor-
coran and Costache 2016; Park and Skoric 2017; Shi and Wu
2017; Ulman et al. 2015).

Table 2. The 15 articles addressing biometrics and ethics in an organizational or business context, recording for each: authors; type (conceptual or empirical); topic; whether biometrics was the focus technology; the biometric generation addressed (first, second, or both); the ethical theories or conceptual frameworks included; the ethical issues raised; and the applied organizational context.

These authors go beyond addressing
biometrics for consumer identity management to address
how the technology can be used to extract behavioral infor-
mation. This issue was also explored in the context of per-
formance data from athletes as employees and the associated
ethical implications (Evans et al. 2017). Although biometrics
for authentication in libraries (Dixon 2008) is acknowledged,
the remaining articles discuss the ethical implications for
business and organizations broadly, rather than in relation to
specific applications or contexts. For example, Ball (2005)
unpacks the ethical concerns if biometrics is used for organi-
zational surveillance in general.
Few of the reviewed articles drew on established ethics theories or frameworks to inform their analyses, with
little convergence; only Habermas’ discursive approach was
mentioned more than once. The absence of a theoretical or
conceptual grounding in the literature is notable and will
be discussed later. Many similar ethical themes were raised
throughout the articles, indicating an opportunity for unify-
ing theories, concepts, and frameworks to be employed in
future research.
The key themes raised were the conceptualization of privacy, the privacy paradox, informed
consent, legal frameworks, and discrimination. However, 15
articles are insufficient to constitute a meaningful discussion.
Similar to the observation by Dierksmeier and Seele (2018)
about Bitcoin scholarship, important research on biometrics
and ethics is found in other disciplines that pertain to busi-
nesses and organizations. Such research was found in the
database searches on biometrics. As they did not explicitly
discuss business ethics and biometrics in applied organiza-
tional contexts, they were not included in Table 2 and the
previous discussion. However, they discuss these themes in
general terms and will be used to inform a fuller discussion of the themes from the articles in Table 2. The criteria for inclusion were that the entry appeared as a result in the previous database searches and that the book or article discussed the ethical implications of biometrics for non-security purposes; research considering biometrics solely for security topics, such as border security and human trafficking, was not relevant. The themes were used as search terms
to identify supporting literature that may not have been cap-
tured by the original search. These themes, combining the
literature from the original search and the revised search,
will now be discussed.
Privacy was the theme most frequently referred to, suggesting its primacy as an ethical issue. It is worth noting that, for Royakkers et al. (2018), privacy is more of a legal than ethical issue, given the existence of regulatory frameworks. The primacy of privacy is unsurprising, as Evans et al. (2017) argue that biom-
etric data are more sensitive than statistical data. Biometric
technology is also argued to be an invasion of privacy, as
it facilitates surveillance inside organizations toward their
workers and outside organizations toward their customers
and society (Ball 2005; Corcoran and Costache 2016; Roy-
akkers et al. 2018). This contravenes the right to remain
anonymous (Odoherty et al. 2016), which is especially the
case for biometric technology embedded in wearable devices
(Park and Skoric 2017).
The literature also points to different types of privacy. Ball (2005) points to corporal privacy and
bodily integrity, and Shi and Wu (2017) refer to genetic pri-
vacy. Park and Skoric (2017) note the difference between
institutional privacy, which is governed by legislation for data protection, and social privacy, which is concerned
with social norms such as interaction patterns. Royakkers
et al. (2018) raise the concepts of spatial and mental pri-
vacy. Although these distinctions were acknowledged by the
authors, they were not explored in detail.
Some scholars conceptualize privacy as an issue of autonomy (Karkazis and Fishman 2017;
Sutrop and Laas-Mikko 2012) and the individual’s control
over how and when they are represented to others (Alterman
2003). Privacy has been conceptualized differently. The first
notable framework is from Clarke (1997 in Campisi 2013),
who distinguishes between four types of privacy regarding
information technology and individual rights: decisional pri-
vacy is the right of the individual to make decisions regard-
ing their life without undue interference, spatial privacy is
the right to personal physical space that cannot be violated
without explicit consent, intentional privacy is the right
to forbid or prevent further communication of observable
events (e.g., conversations held in public) or exposed fea-
tures (e.g., publishing photos), and informational privacy
refers to the right to limit access to personal information
that represents information that could be used to identify
an individual.
New technologies make it possible to gather more physical and behavioral data than before.
As such, people’s understandings of privacy have evolved.
Finn et al. (2013) updated the categories by Clarke (1997 in
Campisi 2013) to include seven categories of privacy. They
considered the framework by Clarke (1997 in Campisi 2013)
insufficient to address the concerns raised by new technolo-
gies. These are: privacy of the person, the right to keep bodily functions and characteristics (such as genetic codes and biometrics) private; privacy of behavior and action, covering conduct that happens in both public and private space; privacy of communication, the right against others intercepting communications, including mail, the use of bugs, directional microphones, telephone, or wireless communication interception or recording, and access to e-mail messages; privacy of data and image, which prevents an individual's data from being automatically available or accessible to others and ensures that people can have control of their own data; privacy of thoughts and feelings, whereby individuals possess the right to independent thought; privacy of location and space, whereby individuals have the right to move in public or semi-public space without being identified, tracked, or monitored; and privacy of association, which is concerned with people's right to associate with whomever they wish without being monitored (Finn et al. 2013, pp. 8–9).
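To make the taxonomy concrete, the seven categories can be encoded so that a given biometric application is annotated with the privacy types it implicates. The following Python sketch and its example mapping are illustrative assumptions, not part of Finn et al.'s (2013) framework; the example mirrors the FRT discussion below.

# Illustrative encoding of Finn et al.'s (2013) seven privacy types;
# the application-to-type mapping below is a hypothetical example.
from enum import Enum, auto

class PrivacyType(Enum):
    PERSON = auto()               # bodily functions and characteristics
    BEHAVIOR_AND_ACTION = auto()  # conduct in public and private space
    COMMUNICATION = auto()        # interception of communications
    DATA_AND_IMAGE = auto()       # control over one's own data
    THOUGHTS_AND_FEELINGS = auto()
    LOCATION_AND_SPACE = auto()
    ASSOCIATION = auto()

# Example annotation: emotion-sensing FRT in a public space arguably
# implicates several categories at once (see discussion below).
emotion_frt_in_public = {
    PrivacyType.LOCATION_AND_SPACE,
    PrivacyType.THOUGHTS_AND_FEELINGS,
    PrivacyType.BEHAVIOR_AND_ACTION,
    PrivacyType.ASSOCIATION,
}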
There may be overlaps in these categories; however, each represents
distinct forms of privacy that have emerged regarding peo-
ple’s changing relationships with technologies. For example,
privacy of location and space may seem similar to privacy
of behavior. However, privacy of location and space relates
to the right to move throughout space without tracking; pri-
vacy of behavior is concerned with the right to behave how a
person chooses without interference, as long as they are not
harming others (Finn et al. 2013). Privacy of association is
different to privacy of behavior, as the former is concerned
with the right to associate with any group (e.g., unions or
religion) and privacy of behavior is the right to behave
within these groups as a person sees fit (Finn et al. 2013).
Most attention has been given to informational privacy, as biometrics collects the personal data
(Cavoukian et al. 2012; Smith et al. 2013; Sutrop and Laas-
Mikko 2012; Van der Ploeg 2003) most closely aligned with
the idea by Finn et al. (2013) of privacy of data and image.
However, it can be argued that the evolution of second-gen-
eration biometrics and commercial and civil usage means
that other types of privacy are equally relevant (Ball 2005;
Royakkers et al. 2018). For example, FRT in public spaces
that can measure emotional responses has the capacity to
record interactions with social groups, impinging on the pri-
vacies of location and space, thoughts and feelings, behavior
and action, and association.
The proliferation of the technology exponentially increases privacy concerns.
Alterman (2003) argues that the greater the representa-
tions that identify people, the more systems are linked
that house people’s data, resulting in greater loss of
privacy. In particular, second-generation biometrics is
argued to threaten privacy on a large scale as a result of technological advancements (Jain et al. 2011). Conversely, if the technology is
less concerned with unique identifiers and if systems are
not linked, this suggests a lesser violation of privacy. For
Corcoran and Costache (2016), the extent of privacy viola-
tions is contingent on who owns the data, what the propor-
tional benefit to the parties are, and whether informed con-
sent occurred. This illustrates that it is not the biometric
technology that influences the extent of privacy concerns,
but the nature of the application and the data collected.
People's relationships with privacy change as they adopt new technologies (Odo-
herty et al. 2016; Park and Skoric 2017). For example, in
the study by Trocchia and Ainscough (2006), privacy was
not the main concern for people using biometric technol-
ogy as part of the customer experience. Convenience was
more pressing, followed by identity theft (Trocchia and
Ainscough 2006). This highlights the dynamic nature of
the "privacy paradox" noted by Corcoran and Costache (2016), in which people seek to guard their privacy despite
sharing a substantial amount of information online.
The paradox describes people's contradictory relationship with technology and privacy, specifically, freely exchang-
ing personal images and information while simultaneously
having increased concerns about privacy and security
(Taddicken 2014). The review by Kokolakis (2017) of the
“privacy paradox” demonstrates that attitudes toward pri-
vacy and associated behaviors are highly contextual and
vary according to the technology and circumstances. The
increasing use of social media and interconnected technol-
ogy means that people are exercising free will and choos-
ing to share more information about themselves, but does
not mean they are less concerned about privacy (Maltseva
and Lutz 2018; Naker and Greenbaum 2017). Research
highlights that younger generations are more concerned
about digital privacy, despite being digital natives
(Hoofnagle et al. 2010). Although people have reasonable
expectations of privacy (Milligan 1999), as social systems
change and technologies evolve, so do expectations of pri-
vacy. For example, first-generation fingerprint biometrics
is used for identity management and security. Second-gen-
eration behavioral biometrics can analyze an individual’s
emotions—tantamount to mind reading (Sprokkereef and
de Hert 2012)—arguably a greater invasion of a person’s
privacy. This enables evaluation of people’s inner con-
ditions at scale, which is different to prior technologies
and applications that are restricted to assessing external
characteristics.
A central ethical consideration for deploying biometric technology is whether data subjects have provided
informed consent. Several authors in the review noted the
importance of informed consent when deploying biomet-
ric technology (Corcoran and Costache 2016; Ulman et al.
2015), but do not assume it solves all ethical dilemmas
related to privacy (Odoherty et al. 2016). Informed consent
is an issue that varies with the application of the technol-
ogy. If an individual consents to their biometric data being
collected and used for legitimate use (e.g., for a research
project), then it could be argued that these concerns are not
equally significant (Alterman 2003). If individuals consent
to their data being collected, such as athletes and wearable
technology, then their sense of self is not as compromised
and privacy concerns are lessened; their privacy is not being
invaded. Arguably, it is willfully relinquished by the subject.
Even personal data that are considered protected by law,
such as race or sexual activity, can be collected if the subject
has voluntarily consented (Sprokkereef and de Hert 2012).
Consent is especially fraught for biometrics in which covert capture is possible. Subjects are
unaware that their information is being collected and so are
unable to provide informed consent (Norval and Prasopoulou
2017; Sutrop and Laas-Mikko 2012). This is often the case
in situations of security and surveillance (Jain and Kumar
2012). According to Sprokkereef and de Hert (2012, p. 82),
“the embedded systems, ambient intelligence, distant sens-
ing and passive biometrics involved require no conscious
cooperation from subjects and thus pose a challenge to the
traditional concepts used in the fields of data protection and
human rights.”
Even when informed consent is obtained, it does not nullify ethical con-
cerns. For example, power disparities between parties make
consent challenging. In the case of professional athletes,
even if they consent to the collection of their biometric data
and extract benefits from participation, the power relation-
ships between athletes, their teams, and the league often
mean that not consenting could have significant repercus-
sions for their careers (Karkazis and Fishman 2017). Even
if the relationships have equal power, if the benefits of par-
ticipation are greater for one party than another, it would
potentially violate the principles of fairness (Introna and
Nissenbaum 2010). For Introna and Nissenbaum (2010),
there may always be a power imbalance that is not in favor
of the subject, as many individuals (perhaps none of whom
the data subjects are aware of) may have access to, and view,
their personal biometric information as part of the process
of data storage, analysis, and dissemination.
A related issue is whether data subjects are truly sufficiently informed to properly consent. Several studies point to limited public understanding of topics such as digital literacy, safety, information security, and privacy (Hoofnagle et al. 2010; Park 2013). Given
the lack of data subjects exercising their ex ante rights to
withdraw consent to biometric data collection (Introna and
Nissenbaum 2010; Sprokkereef and de Hert 2012), this may
be attributable to lack of awareness of their rights. The ques-
tion of whether data subjects are truly sufficiently informed
about biometrics to properly consent is worth exploring.
The literature holds that data protection legislation is a cornerstone of the appropri-
ate use of biometrics (Dixon 2008; Evans et al. 2017; Lodge
2006; Malsch 2013; Odoherty et al. 2016; Sud and VanSandt
2015), guarding against issues such as "function creep"
(Lodge 2006; Sud and VanSandt 2015) and establishing
clear guidelines around data ownership (Corcoran and Cos-
tache 2016; Evans et al. 2017; Royakkers et al. 2018), pro-
portionality (Corcoran and Costache 2016; Malsch 2013),
benefit (Corcoran and Costache 2016), access (Evans et al.
2017), transparency (Royakkers et al. 2018), and purpose
(Odoherty et al. 2016). The United States Fair Information Practices and the OECD guidelines are cited by Dixon (2008) as key examples guiding the ethical use of biometrics, alongside professional codes of ethics guiding organizational actions; the latter are noted by Mingers and Walsham (2010) as important ethical touchstones.
Despite the existence of legislation and professional guidelines to regulate the appropriate use
of biometrics and data, the extreme variance that can occur
within and across national contexts was explicitly noted
(Lodge 2006; Winter 2014). Winter (2014) argues that the
privacy and data protection legislation in the European
Union (EU) outstrips virtually non-existent frameworks in
China. They state that the discourse around privacy ver-
sus security can vary substantially in a national context.
For example, the EU focuses on migration, oversight of the asylum process, and visa fraud. In the US, the emphasis is
on surveillance to facilitate activities in the “war on terror”
(Lodge 2006). These separate purposes engender different
applications to collect and store data with corresponding dif-
ferent ethical considerations. In the EU, the emphasis is on
identity verification, which more closely relates to privacy
and data protection. In the US, ethical concerns are more
related to profiling and discrimination (Dixon 2008; Lodge
2006).
The literature on data governance—which has existed since the advent of biometric technology—addresses ethical issues such as privacy by design (Norval and Prasopoulou 2017) and value sensitive design (Davis and Nathan 2015). Appropriate and effective data governance can address many concerns about biometrics, particularly around privacy. Data
governance specifically includes managing the availability,
usability, integrity, and security of the data, and subsequent
validity and interpretation (Karkazis and Fishman 2017).
For example, access to privileged information can be read-
ily tracked by secure and supportive information systems
to ensure accountability for activities (Jain et al. 2011). It
includes attention to the reliability and accuracy of the data,
who has access to it, and ensures appropriate training of the
individuals involved in the process of collection, analysis,
and dissemination (Karkazis and Fishman 2017). Data gov-
ernance is most effective when supported by privacy regula-
tions (Lodge 2012; Schumacher 2012).
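As a concrete illustration of the access-tracking element of data governance described above, an audit trail can record who accessed which biometric record, when, and for what purpose. The following is a generic Python sketch under assumed names, not a system drawn from the cited literature.

# Minimal sketch of an access audit trail for biometric records.
# Generic illustration only; not a system from the cited sources.
import datetime

audit_log: list[dict] = []

def read_record(user: str, record_id: str, purpose: str) -> None:
    """Log who accessed which record, when, and why, so that
    accountability for access can be reviewed later."""
    audit_log.append({
        "user": user,
        "record": record_id,
        "purpose": purpose,
        "at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    })
    # ...retrieval of the biometric record itself would happen here...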
Biometric data are classified as personal data under data protection legislation
(Karkazis and Fishman 2017; Sprokkereef and de Hert 2012;
Van der Ploeg 2003). Recently, the EU introduced the General Data Protection Regulation (GDPR), which came into
effect in May 2018, and “regulates the processing by an
individual, a company or an organization of personal data
relating to individuals in the EU” (European Commission
2018a), thereby including all biometric data. It stipulates
that all organizations must adhere to principles of data pro-
cessing, including collecting data in a transparent and lawful
manner, only collecting data for specific purposes, not trans-
ferring data from one purpose to another, storing data for no
longer than necessary, and ensuring all organizations install
technical safeguards to protect data (European Commission
2018b). Although these principles may already be in prac-
tice, the GDPR is a "profound" (Zarsky 2017) piece of legislation, as breaches have potentially severe consequences, such as sanctions of up to four percent of an organization's annual global turnover or €20 million, whichever is greater (Albrecht
2016). To ensure compliance, approximately 75,000 pri-
vacy officers may be appointed across the globe (Custers
et al. 2018). For those outside the EU, the OECD Privacy
Guidelines are recommended for member states and cover
key ethical principles such as purpose specification, open-
ness, collection limitation, data quality, accountability, use
limitation, individual participation, and security safeguards
(Campisi 2013).
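As a simple worked illustration of how the GDPR sanction cap described above scales with firm size (figures per GDPR Article 83(5); the turnover value below is hypothetical):

# GDPR maximum administrative fine: the higher of 4% of annual
# global turnover or EUR 20 million (Art. 83(5)).
def max_gdpr_fine(annual_global_turnover_eur: float) -> float:
    return max(0.04 * annual_global_turnover_eur, 20_000_000.0)

# A firm with EUR 2 billion turnover faces a cap of EUR 80 million:
print(max_gdpr_fine(2_000_000_000.0))  # 80000000.0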
The existence of legislation also does not automatically lead to adherence (Evans et al. 2017; Lodge
2006; Naker and Greenbaum 2017; Winter 2014). The evo-
lution and diffusion of technology often outpaces the law
(Malsch 2013; Schumacher 2012) and allows room for func-
tion creep and data to be obtained outside the stated purpose
(Lodge 2006, 2012). For example, Park and Skoric (2017)
argue that there is limited legislation covering biometrics
used for data marketing. Naker and Greenbaum (2017)
acknowledge that, in the US, federal privacy law regulating
the commercial uses of FRT has lagged and only covers limited sectors; as adoption grows, it is likely that there will be broader legislation. Even with the
seemingly robust GDPR, there are significant differences in
legal provisions and enforcement across nations. For exam-
ple, only France has a legal obligation to undertake privacy
impact assessments (Custers et al. 2018). Under US privacy
law, professional athletes do not own their biometric data. In
the EU and Canada, this would be unlawful (Karkazis and
Fishman 2017). Thus, operating in a multinational business
environment multiplies concerns (Lodge 2012). Further,
in cases in which there is an absence of legal frameworks
guarding privacy, such as China or some developing nations,
the question arises as to which ethical principles will under-
pin organizations seeking to self-regulate.
A further ethical theme is discrimination and profiling as a result of the biometric data gener-
ated (Ball 2005; Corcoran and Costache 2016; Lodge 2006;
Mingers and Walsham 2010; Sud and VanSandt 2015). Both
first- and second-generation biometrics can be used to demo-
graphically classify people based on age, ethnicity, gender,
and sexual orientation (Cavoukian et al. 2012; Sprokkereef
and de Hert 2012). The latter received significant public
backlash when news was released of a study that used FRT
to assess sexuality (The Economist 2017). These capabilities
and events raise the question: if organizations have the right
to discriminate against LGBTI+ individuals, what would
prevent them from using technology to identify people from
this group and terminate employment or refuse service?
Such classifications rely on incomplete information and deindividualization, leading to
unjustified and, in liberal societies, unjustifiable discrimina-
tion and stigmatization (Ball 2005; Sutrop and Laas-Mikko
2012). For organizations, when these data are used for hir-
ing and firing decision-making, there are risks of exploita-
tion, coercion, and employee discrimination (Campisi 2013;
Naker and Greenbaum 2017). Given the history in the US
of organizations using genetic testing to inform employment
decisions (Murry et al. 2001) and health insurers using per-
sonal fitness data to influence pricing (Gurdus 2017), this is
a legitimate concern.
It is not the technology itself that is discriminatory, but the individuals using the data to inform
decision-making. This means acknowledging the agency of
the individuals involved in the development and deployment
of biometrics. Discussions about negative effects often adopt
a technologically deterministic view that deifies technology
(Van der Ploeg 2003). Martin and Freeman (2004) note this
as one end of a spectrum of the traditional view of technol-
ogy. They advocate for abandoning this view for a more
nuanced socio-technical systems approach. This would allow an appreciation of the political (Norval and Prasopoulou 2017) nature of racial, gender, and sexual
identities, particularly as individuals have the right to control
the way they are presented to others and how their identi-
ties are projected, as subjective and self-defined (Alterman
2003). For example, it has been highlighted in the media that
FRT is “racist” and the technology was developed by white
coders who created algorithms that failed to account for
physical nuances in other races, due to own-race bias (Bre-
land 2017), an argument that has empirical support (Klare
et al. 2012). Thus, to be non-discriminatory, the use of biom-
etrics capable of categorizing people should be used with an
appreciation and critical awareness of the political nature of
identity and the implications within. Given the prevalence of workplace discrimination (see Dhanani et al. 2018 for a meta-analysis), biometrics could extend its technological capabilities.
Related to the discriminatory power of biometrics is social exclusion.
This issue was raised by Sud and VanSandt (2015) in their
research on a biometric identity card in India. One of the
main arguments advocating for this is that social inclusion
cannot occur without a legal identity, which the biometric
identity card provides. They do not address the way biom-
etric technology can be exclusionary, as not all individuals
may be able to fulfill the criteria for physical attributes iden-
tified by Jain et al. (2004). Wickins (2007) argues that people
with physical disabilities may not fit the criteria of physical
universality (e.g., they may not have the digits for fingerprint
identification or capacity for speech for voice recognition).
Wickins (2007) contends that the elderly, or people with
mental illness, may not be comfortable using biometrics,
which impinges on the criterion of “acceptability” that Jain
et al. (2004) consider a requirement for biometric systems.
As such, already marginalized individuals may be subject to
further social exclusion at the hands of biometrics.
Research
Although there are overlapping themes in technology and business ethics,
there must be greater engagement with the existing busi-
ness ethics literature and stronger theoretical contributions.
A critical starting point is for researchers in biometrics and
business ethics to clarify their assumptions about technology
and society. Are they adopting a technologically or socially
deterministic view and why? A common assumption is that
technology is ethically neutral (e.g., Brusoni and Vaccaro
2017), which is problematic, given that it does not provide
a nuanced appreciation of people’s relationships with tech-
nology (Van der Ploeg 2003), does not account for the socio-technical system, and perpetuates the separation thesis that plagues business and ethics
research (Martin and Freeman 2004). This issue must be
addressed for future research to effectively build on existing
traditions of business ethics literature.
The accountability of organizations was missing from the articles in Table 2, despite being
a fundamental concern of technology and business ethics
scholarship. It was implicit in debates about privacy and
informed consent (i.e., organizations should be held account-
able for ensuring informed consent from their subjects) but
lacked an active interrogation of the ethical roles, respon-
sibilities, and accountabilities of organizations. Martin’s
(2018) framework on the firm’s responsibility for algorithms
illustrates that the nature of the responsibility is not fixed but
determined by two factors: the role of the decision in society
and the role of the algorithm in the decision. This framework
is most relevant when applied to the organizations responsi-
ble for creating the algorithms. However, some observations
can be extended from this framework to explore the roles
and responsibilities organizations should consider when
deploying biometrics.
Therefore, it can be stated that the role of the algorithm
is significant and the role of the decision is less fixed.
This review argues that this depends on the purpose of the
data and, in the context of the application, the relation-
ship between the organization and the subject, that is, an
employee or consumer. This allows for a fuller apprecia-
tion of the situated nature of technology and business ethics
(Martin and Freeman 2003). It is crucial to understand the
nature of the relationship and, drawing from stakeholder the-
ory, examine the power dynamics between parties, particu-
larly when considering the ethical implications of informed
consent. For example, a customer has the power to deny their
custom to an organization if they do not wish to consent to
their biometric data being collected. However, an employee
is less likely to quit their job.
In the employment relationship, the role of the decision is pivotal. Using first-generation biomet-
rics for identity management could physically bar someone
from the workplace and prevent them from doing their job.
This pales in comparison to second-generation behavioral
biometrics. For example, in 2017, Westpac, an Australian
bank, stated their intention to trial biometrics to measure
the emotions and moods of their employees so that manage-
ment could intervene to improve stress levels (Eyers 2017).
This links to the literature about employee monitoring pre-
viously discussed (Brown 1996; Loch et al. 1998; Martin
and Freeman 2003; West and Bowman 2016). There is lit-
tle evidence of these examples in peer-reviewed research,
as organizations are likely to be reluctant to expose these
practices to external critical inquiry. However, the academic
community can work to understand and influence these technologies and
ethical concerns.
Nevertheless, some conclusions can be drawn about how biometrics can be ethi-
cally used in, and by, organizations. The first is that organi-
zations are accountable for the technology they deploy.
Accountability does not stop with the firms designing the
algorithms, as biometric technology is limited and organi-
zations must be responsible for understanding the technol-
ogy they are using. Further, biometrics has the capacity to
exacerbate existing ethical concerns by facilitating unethical
decision-making. In the example of Westpac, what would prevent the bank from using the data for performance management and discriminatory employment decisions?
A related conclusion, drawing on the theory of double effect, is that organizations must
account for the potential “evil” consequences of technolo-
gies that are intending to do “good.” Consider the potential
“evil” of the Westpac example. If an employee was consist-
ently rated as having a low mood, they may be considered
unmotivated. If they showed higher levels of stress, employ-
ers might feel they could not handle the pressure of their
role. In both situations, the employee may be experiencing
mental health problems such as depression or anxiety, which
may register as low moods or high stress levels. To circum-
vent this, the data would have to be anonymized. However,
if management were unaware of who they should target, they
may be unable to apply the appropriate intervention. When
reviewing the ethical implications of this scenario using a
framework such as Martin and Freeman’s (2003), significant
implications for self, relationship with others, community,
and property can be observed, given the infringements on
freedom, privacy, and respect. Introducing behavioral biom-
etrics into the workplace presents an ethical problem that
has the potential to cause more harm than good. Numerous
types of privacy are violated, in which anonymity cannot
be preserved, consent is constrained by power imbalance,
and the technologies possess in-built biases that facilitate
discriminatory decision-making.
The ethical concerns raised by biometrics and business ethics may not be new; however, they
are more severe and complex. The evolution of biometric
technology means that seven types of privacy are now implicated, including privacy of thoughts and feelings (Finn et al. 2013). Previ-
ously, informational privacy was the main concern (Cavouk-
ian et al. 2012; Smith et al. 2013; Sutrop and Laas-Mikko
2012; Van der Ploeg 2003). The use of multiple biometrics
at any point has exponential ethical implications (Alter-
man 2003). The technology has become so advanced that
informed consent becomes problematic, as the technology
can capture data from subjects who are unaware (Norval
and Prasopoulou 2017; Sutrop and Laas-Mikko 2012). One concern is that not only can biometrics facilitate discriminatory decisions, but also the use
of technology provides the illusion of objectivity (Martin
2018). This seemingly enhances the veracity and validity of
what may be unethical practices. To illuminate these issues,
the following section provides a more detailed roadmap for
theoretical and empirical directions to advance research and
highlights the practical implications for organizations.
Future research can build on the directions proposed above. Table 2 illustrates that there has not been
significant attention given to the relevant ethical frameworks
in the literature on biometrics in organizational contexts.
This is an important contextual approach to use in future
research. The extensive review by Mingers and Walsham
(2010) advocates Habermas’ discourse ethics as a theoreti-
cal lens. This sets an important foundation, as it allows for
an exploration of the nuance of people’s relationships with
technology, affording an appreciation of individual subjec-
tivity, as demonstrated through communications. Habermas' discourse ethics should be used for examinations of biometrics in organizational settings, as it would capture
the situated and dynamic nature of the relationship between
technology and society. This was described by Martin and
Freeman (2003, 2004) and is present in Nissenbaum’s (2004,
2009, 2011) contextual integrity framework used by Winter
(2014).
Ethical concerns, including for the organizations deploying biometrics, are influenced by variations in
the technology and its applications. In Winter’s (2014) study
of retail customers, cameras for surveillance were acceptable
to ensure security in a retail environment. However, using
biometric technologies for eye tracking and emotion recogni-
tion made customers feel uncomfortable and raised ethical
concerns. As such, Nissenbaum’s (2004, 2009, 2011) con-
textual integrity framework merits further attention, particu-
larly in relation to privacy, as it facilitates an examination
of what happens when ethical standards developed in the
contexts of technological emergence migrate to new contexts
with less-established norms.
Research could also extend Nissenbaum's (2004, 2009, 2011) framework of contextual
integrity to reflect the multifaceted and ever-changing terrain
of privacy concerns. Martin (2016) extended Nissenbaum’s
(2004, 2009, 2011) approach to develop a social contract theory of privacy. They recognized that people develop micro-social
contracts with each provider, technological artifact, and cir-
cumstances as they navigate the increasingly interconnected digital environment. Martin (2016) argues that stakeholder complaints about privacy violations are often due
to changes in social contracts without consultation and
approval. Future research should consider this micro-social
contract narrative. This requires an examination of expectations about privacy from a stakeholder perspective and of what is considered when forming those norms (Martin 2016), applied to biometrics in organizational contexts. For exam-
ple, Carpenter et al. (2018) surveyed attitudes on biometrics
by employees and discovered that professional affiliations
influenced an individual’s trust in an organization. The
study by Maltseva and Lutz (2018) of people using self-
tracking products discovered that privacy concerns had a
negative effect on trust and that self-tracking activities led
to increased disclosure of personal information. However,
they noted that the context in which the data were collected
played a key role. These studies did not examine these phe-
nomena from an ethics perspective. Thus, Martin’s (2016)
framework could be used as a theoretical lens to explain
such findings.
Biometric workplace surveillance merits further research. There is ongoing attention
given to the ethical implications of workplace surveillance,
with Ball (2010) providing a recent review of the literature
and others making conceptual (Henschke 2017; Moore and
Piwek 2017) and empirical (Carpenter et al. 2018; Holland
et al. 2015) contributions. Using technology to monitor
employees is not novel (Brown 1996; Fairweather 1999).
Milligan (1999) raised the issue that biometric surveillance
for security purposes could be extended to observation
of employees to better understand behaviors, particularly
related to performance. Organizational surveillance litera-
ture draws from a multitude of theories in addition to those
utilized by Ball (2005), such as virtue ethics (West and
Bowman 2016), psychological reactance, planned behav-
ior, social identity theories (Martin et al. 2016), and routine
activity theory (de Vries and van Gelder 2015). The explana-
tory power of these theories when examining biometric sur-
veillance is a promising avenue for future research.
A further theme is corporal surveillance and bodily integrity, which has consol-
idated into a body of literature on the “digitalisation of self.”
Deborah Lupton’s ongoing work in critical digital data stud-
ies and critical digital health studies is particularly useful
for theorizing the ethical effect of consumer product biom-
etrics, particularly those designed for self-tracking, such as
Fitbit and Apple Watch. Lupton examines key issues such as
power relationships between data subjects and technology
(Lupton 2016a), the complex sense-making processes peo-
ple undergo to understand and accept data (Lupton 2016b,
2017), potential anxiety caused by biometric self-tracking
(Lupton 2013), exposure to function creep, and potential
exploitation (Lupton 2016a). Lupton (2016c) points to the value of a critical sociological perspective to examine the ethical dilemmas pertaining to
risk associated with technological innovations. This view is
supported by Sutrop and Laas-Mikko (2012). The theorizing
by Lupton and other researchers (e.g., Henschke 2017) and
empirical work by Maltseva and Lutz (2018) on the digitized
self is valuable for investigating the effect of biometric data
capture on employees and consumers.
Future research should draw on additional theoretical traditions of business ethics, such as stakeholder theory and corporate social responsibility (CSR), to more
squarely situate further research within business ethics
scholarship. As discussed in the previous section, ethical
implications are strongly related to the relationships between
individuals, biometrics, and their contextual environment.
As such, an individual's position as data subject and stakeholder influences the nature of the ethical concern, making stakeholder
theory a natural theoretical lens for such an analysis. For
example, if FRT is used in a retail environment to gather data
on shoppers and employees, how would the ethical impli-
cations differ for the separate groups? Stakeholder theory
can illuminate the complicated social relationships between
data subjects and organizations, enabling an exploration of
ethical issues such as relational trust (Greenwood and Van
Buren III 2010). Similarly, Sheehy’s (2015, p. 635) concep-
tualization of CSR as “an international private business self-
regulation” indicates that an examination of self-regulation
practices of businesses using biometrics would benefit from
drawing on the CSR literature, paying particular attention to
the responsibility and accountability of organizations as they
deploy biometric technology. Given that technology law is
slow and inconsistently applied (Lodge 2012), the ethical
use of biometric technology could be well positioned within
CSR research. This could assist with providing a larger
empirical foundation for much-needed data collection and
position the ethical use of biometrics and other technologies
as a key responsibility for socially responsible organizations.
There is little empirical research on biometrics in applied business and organizational contexts. There-
fore, there are many opportunities for empirical advance-
ment. The contextual nature of ethical concerns means that
data should be obtained from multiple settings and situated
within the broader environment, for example, giving atten-
tion to the potential influence of the legal environment of
ethical practice and contextual nuances, such as industry-
level environmental factors. McStay’s (2014, 2016, 2018)
research on the affordances and concerns of biometrics that
captures and analyzes people’s emotions demonstrates that
there is a strong business case for using the technology in
retail organizations, marketing, and advertising. It would be worth examining whether such applications actually deliver a better buying experience.
Context also includes national law and culture. Different national legal environments allow for differ-
ent degrees of privacy when collecting data, as observed in
China’s strict internet governance (Hou 2017) and India’s
biometric identity card (Sud and VanSandt 2015). Cultural
values and attitudes also have an influence. As privacy is
considered by some a “value” not a “right” (Mingers and
Walsham 2010; Winter 2014), attitudes toward privacy could
be influenced by national cultural values. This view has
support (Li et al. 2017; Miltgen and Peyrat-Guillard 2014);
however, not all research points to differences in cultural
attitudes toward privacy (e.g., Pentina et al. 2016), which
allows for empirical contributions.
Future research can take many methodological forms. Due to the importance
of a contextual understanding, qualitative and exploratory
research is required to build empirical foundations. Mixed
methods case studies allow for an appreciation of the situ-
ational factors that influence privacy and other ethical
concerns. Although industry collaboration on case studies
would be immensely valuable, due to the sensitive nature of
the topic and privacy as the main ethical concern, finding
industry partners willing to provide data about their activi-
ties, even ethical ones, is likely to be difficult. Quantitative
surveys to assess broader themes, such as consumer attitudes
or industry-level themes, would be worthwhile. For example,
McStay’s (2018) qualitative research into behavioral biomet-
rics using 100 interview subjects can be scaled using quanti-
tative methodologies, considering contexts beyond the UK,
and gathering data to facilitate cross-cultural comparisons.
Drawing from the reviewed articles, Table 3 illustrates areas for future research and implica-
tions for practice.
Given the focus on informational privacy in the literature, other forms of privacy must be accounted for in
research and practice. The distinctions between categories
of privacy must be meaningfully addressed and debated.
Because second-generation biometrics is arguably more
invasive than first generation (Jain and Kumar 2012), explor-
ing the different types of privacy and ethical implications
associated with the different generations is a way forward
for researchers. For practice, given the focus on informa-
tional privacy, other forms of privacy may not be recognized
in professional guidelines, organizations policies, codes of
ethics, and legislation. Organizations should assess the way
they conceptualize and practice privacy for the various bio-
metric technologies they use with a critical awareness of the
categories of privacy described by Finn et al. (2013).
Informed consent was a theme in many articles, but there is limited evidence of how it is practiced, particularly pertaining to applied biometrics.

Table 3. Empirical advancement and implications for practice: for each theme (privacy, the privacy paradox, informed consent, organizational responses to legislative change, and discrimination), directions for future research and implications for practice.

As attitudes toward privacy vary with the relationships between the technology and the organization, greater
attention to the dynamic nature of the privacy paradox
is required. For example, Maltseva and Lutz (2018)
showed that trust is a key factor related to privacy con-
cerns. They posited that privacy concerns are influenced
by the context, purpose, and nature of the data-collection
process. Drawing from stakeholder theory could inform a
more holistic view of how the paradox may differ across
organizational stakeholders and provide a more nuanced
understanding of the way power disparities between par-
ties influence ethical implications and translate into more
effective organizational policies and practices.
On informed consent, empirical research can contribute to questions such as
whether people are sufficiently informed and able to under-
stand complex biometric technology to be able to properly
consent (Hoofnagle et al. 2010; Park 2013), and in what
contexts covert data collection occurs, preventing the opportunity for
informed consent (Norval and Prasopoulou 2017; Sprok-
kereef and de Hert 2012). An additional concern is propor-
tionality, that is, whether there is a proportional increase
in gains for data subjects and the organizations (Alterman
2003), particularly if it is being used to gather data from
employees (Karkazis and Fishman 2017). These themes
are also important for ethical practice. Organizations must
ensure that subjects are informed about the nature of the
technology and the processes governing the resultant data.
Further, they must guarantee that the study is beneficial
to the subjects.
Legislation can enforce ethical practice; however, there are limitations. For
example, only healthcare and financial institutions are cov-
ered by existing US legislation on biometric data capture
(Naker and Greenbaum 2017). Even the European GDPR
does not ensure consistency in practices (Custers et al.
2018). Further evidence is required about how organiza-
tions comply or evade legislation, how they seek to engage
in ethical technology practices, and the ethical principles
used to justify behaviors. The study by Winter et al. (2004)
about the ethical attitudes of IT professionals found that
Machiavellianism and ethical relativism influenced accept-
ance of intellectual property and privacy rights violations,
which varied across professional groups and workplace
experiences. Given that this research illustrates the ease
with which individuals violate legal and ethical principles,
more empirical evidence of such phenomena is required,
particularly across cultural contexts and legal environ-
ments. Because working in multinational contexts provides
additional concerns, a multinational organization’s code
of ethics on the use of biometric technology must also be
global in its orientation. As legal change is outpaced by technological change, organizations must attend to their ethical obligations first.
The theme of "discrimination" has significant ethical implications.
Discriminatory potential is present in both first- and second-generation behavioral biometrics (Schumacher 2012). Therefore, commercial biometrics should only be deployed with a
deep respect for the purpose principle, only using it when
it is the most effective way to gather data about the indi-
vidual without harming their rights (Lodge 2006, 2012). The
research community must also be engaged in this topic to
gather the necessary data to support the ethical use of biom-
etrics to prevent discrimination. This includes research about
which organizations are collecting these data and why, and
how they are being used to inform decision-making. A multidis-
ciplinary approach is advocated, as biometrics ethics trav-
erses the confines of one discipline. For example, although
it is not the explicit domain of business ethics, attention to
the findings of the accuracy tests performed by the National
Institute of Standards and Technology (NIST 2018) would
allow an appreciation of the accuracy of biometric technol-
ogy and how technology accuracy influences discriminatory
practices.
Biometric technology raises distinct ethical questions for organizations, as developers, users, and deployers of tech-
nology. While the disciplines of computer and technology
ethics extensively explore the implications of biometrics, the
ethical implications with which organizations must contend
have gained comparatively little attention. The aim of this
article was to review the smaller body of research about
biometrics and business ethics and draw from the larger
research areas to establish an empirical and conceptual foun-
dation for further research.
A limitation of this review is the few articles about biometrics and business ethics. This was
offset by incorporating the broader literature on biometrics
and ethics into a discussion about the main themes and theo-
ries. Given that the current and potential scope of the litera-
ture on biometrics and ethics is vast, it is likely that there are
additional ethical concerns worth the attention of business
ethics researchers that were not examined here.
The review closes with a need for ongoing attention to theoretically and
empirically bridging the gap between biometrics research
and business ethics. Future research must acknowledge the
contextual and complex nature of people’s relationships with
technology and consider the particularities of the ethical
concerns of biometrics, not technology in general. Because
second-generation biometrics has greater capacity to be used covertly, a generalized treatment of biometrics and ethics would not capture the subtleties
of an individual’s relationship with the technology and the
organization.
If ethical principles are to be upheld, business ethics research becomes increasingly
important to ensure individual rights and civil liberties are
not compromised. Computer and technology ethics research-
ers have made valuable contributions; however, the business
and organizational implications of biometrics are not their
primary ethical concern. As organizations are the site of the
development and deployment of biometric technology, and
biometrics has the capacity to incite unique ethical concerns,
it would be beneficial to the community to give this topic
greater attention.
Biometric Technology and Ethics: Beyond Security Applications
IT513 Unit 2 Assignment
Student’s name: Roger Dominguez
Today’s date:
Quotes
Parenthetical style quote
“In order to deter the opponent cyber-provocation and to gain dominance in the cyber-
warfare, we must collect information, make decisions, and act before the enemy” (Kim et
al., 2018, p. 76).
Narrative style quote
According to Kim et al. (2018), “It is necessary to change the traditional defensive cyber warfare strategy […] with the future cyber battlefield environment” (p. 79).
Reference entry
Kim, S. K., Cheon, S. P., & Eom, J. H. (2018). A leading cyber warfare strategy according to the evolution of cyber technology after the fourth industrial revolution. International Journal of Advanced Computer Research, 9(40), 72–80.
First Paraphrase
Descriptive title:
Using Virtual Reality for Drivers with PTSD
Paraphrase
Data gathered in the United States show that, following a car accident, many drivers suffer from a form of PTSD and develop a phobia of driving again. The psychology department at the University of Würzburg has tried to implement virtual reality to treat these drivers’ fears. Their regimen began with a medical and psychological evaluation and proceeded with two preparatory therapy sessions to get the patients ready. The focus of the treatment was to have the afflicted complete five virtual exposure sessions, meaning virtual driving simulations. If they completed all five successfully, they were then given a final Behavioral Avoidance Test
(BAT). This final test was done on a physical road. Once they completed the full treatment, they were given a closing evaluation and two follow-up calls at six-week intervals. According to Kaussner et al. (2020), the results showed that, of the fourteen drivers on whom this method was tested, seventy-one percent achieved the minimum requirement as measured by their driving instructor, and an even greater share, ninety-three percent, maintained the treatment’s effects through their final phone call in the program (p. 8). These results suggest that virtual reality, as a form of simulated assistance, has promise for treating driving phobia.
Reference entry
Kaussner, Y., Kuraszkiewicz, A. M., Schoch, S., Markel, P., Hoffman, S., Baur-Streubel, R., Kenntner-Mabiala, R., & Pauli, P. (2020). Treating patients with driving phobia by virtual reality exposure therapy: A pilot study. PLoS ONE, 15(1), 1–14.
Second Paraphrase
Descriptive title:
The History and Impact of Artificial Intelligence
Paraphrase
The history of artificial intelligence is longer in imagination than in practice. AI dates back to the 1940s, but only in fiction: science fiction writer Isaac Asimov wrote a story about a robot that operated on AI principles such as self-learning. The term itself was not coined until more than ten years later, by Marvin Minsky and John McCarthy, for a workshop they held on the field at Dartmouth College. This conference furthered advancement in the field, but the United States government saw little value in funding such expensive research given how slowly it was progressing. The advancement of computers revived AI research in the 1990s, and the computing power that was lacking in the past brought new possibilities moving into the future. The modern era of AI has adapted to handle billions of megabytes of information through big data storage. Haenlein and Kaplan (2019) highlight that the major issues going forward revolve around the necessity of AI and how dependent humans may become on it. AI has evolved so rapidly over the past twenty years that it raises a new problem: how does one regulate an industry that evolves this swiftly (p. 9)?
Reference entry
Haenlein, M., & Kaplan, A. (2019). A brief history of artificial intelligence: On the past, present, and future of artificial intelligence. California Management Review, 61(4), 5–14.