Workplace ethics in information technology case studies

By helping people form these connections, we hope to rewire the way people spread and consume information. The social value of pursuing this is debatable, but the economic value has been undeniable. At the time this was written, Mark Zuckerberg had been consistently listed among the top ten richest billionaires by Forbes Magazine, typically in the top five of that rarefied group, an achievement built on providing a free service to the world. What companies like Facebook do charge for are services, such as directed advertising, which allow third-party companies to access information that users have provided to the social media applications.

The result is that ads bought on an application such as Facebook are more likely to be seen as useful by viewers, who are then much more likely to click on them and buy the advertised products. The more detailed and personal the shared information is, the more valuable it will be to the companies it is shared with. This radical transparency of sharing deeply personal information with companies like Facebook is encouraged. Those who use social networking technologies do receive value, as evidenced by the rapid growth of this technology.

Statista reports that there will be over 2 billion social media users worldwide. Even before companies like Facebook were making huge profits, there were those warning of the dangers of the cult of transparency, with warnings such as: transparency destroys secrecy, but it may not limit the deception and deliberate misinformation that undermine relations of trust. If we want to restore trust we need to reduce deception and lies, rather than secrecy. In the case of Facebook we can see that some of the warnings of the critics were prescient. In April 2018, Mark Zuckerberg was called before Congress, where he apologized for the actions of his corporation in a scandal that involved divulging a treasure trove of information about his users to an independent researcher, who then sold it to Cambridge Analytica, a company involved in political data analysis.

This data was then used to target political ads to the users of Facebook, many of which were fake ads created by Russian intelligence to disrupt the 2016 US election (Au-Yeung). Critics note that those in favor of developing technologies to promote radically transparent societies do so under the premise that this openness will increase accountability and democratic ideals. But the paradox is that this cult of transparency often achieves just the opposite, with large, unaccountable organizations that are not democratically chosen holding information that can be used to weaken democratic societies.

This is due to the asymmetrical relationship between the user and the companies with whom she shares all the data of her life. The user is indeed radically open and transparent to the company, but the algorithms used to mine the data and the third parties that the data is shared with are opaque and not subject to accountability. We, the users of these technologies, are forced to be transparent, but the companies profiting off our information are not required to be equally transparent.

Malware and computer virus threats continue to grow at an astonishing rate. Security industry professionals report that while certain types of attack, such as spam, are falling out of fashion, newer types of attack focused on ransomware, mobile computing devices, cryptocurrency, and the hacking of cloud computing infrastructure are on the rise, outstripping any small relief seen in the slowing of older forms of attack (Cisco Systems; Kaspersky Lab; McAfee; Symantec). What is clear is that this type of activity will be with us for the foreseeable future.

In addition to the largely criminal activity of malware production, we must also consider the related but more morally ambiguous activities of hacking, hacktivism, commercial spyware, and informational warfare. Each of these topics has its own suite of subtle moral ambiguities, some of which we will explore here. While there may be wide agreement that the conscious spreading of malware is of questionable morality, there is an interesting question as to the morality of malware protection and anti-virus software.

With the rise in malicious software there has been a corresponding growth in the security industry, which is now a multibillion-dollar market. Even with all the money spent on security software there seems to be no slowdown in virus production; in fact, quite the opposite has occurred. This raises an interesting business ethics concern: what value are customers receiving for their money from the security industry?

The massive proliferation of malware has been shown to be largely beyond the ability of anti-virus software to completely mitigate. There is an important lag in the time between when a new piece of malware is detected by the security community and the eventual release of the security patch and malware removal tools. The anti-virus modus operandi of receiving a sample, analyzing the sample, adding detection for the sample, performing quality assurance, creating an update, and finally sending the update to their users leaves a huge window of opportunity for the adversary … even assuming that anti-virus users update regularly.

(Aycock and Sullins)

In the past most malware creation was motivated by hobbyists and amateurs, but this has changed and much of this activity is now criminal in nature (Cisco Systems; Kaspersky Lab; McAfee; Symantec). Aycock and Sullins argue that relying on a strong defense is not enough; the situation requires a counteroffensive reply as well, and they propose an ethically motivated malware research and creation program.


This idea does run counter to the majority opinion regarding the ethics of learning and deploying malware. Many computer scientists and researchers in information ethics agree that all malware is unethical (Edgar; Himma; Neumann; Spafford; Spinello). According to Aycock and Sullins, these worries can be mitigated by open research into understanding how malware is created in order to better fight this threat. When malware and spyware are created by state actors, we enter the world of informational warfare and a new set of moral concerns.


Every developed country in the world experiences daily cyber-attacks, with the major target being the United States, which experiences a purported 1.8 billion attacks a month. The majority of these attacks seem to be just probing for weaknesses, but they can devastate a country's internet, as did the cyber-attacks on Estonia in 2007 and those on Georgia in 2008. While the Estonian and Georgian attacks were largely designed to obfuscate communication within the target countries, more recently informational warfare has been used to facilitate remote sabotage.


The famous Stuxnet virus used to attack Iranian nuclear centrifuges is perhaps the first example of weaponized software capable of remotely damaging physical facilities (Cisco Systems). The coming decades will likely see many more cyber weapons deployed by state actors along well-known political fault lines such as those between Israel, America, and Western Europe vs Iran, and America and Western Europe vs China (Kaspersky Lab). The moral challenge here is to determine when these attacks are considered a severe enough challenge to the sovereignty of a nation to justify military reactions, and to react to them in a justified and ethical manner (Arquilla; Denning; Kaspersky Lab). The primary moral challenge of informational warfare is determining how to use weaponized information technologies in a way that honors our commitments to just and legal warfare.

Since warfare is already a morally questionable endeavor, it would be preferable if information technologies could be leveraged to lessen violent combat. For instance, one might argue that the Stuxnet virus, which operated undetected for years, did damage to the Iranian nuclear weapons program that in generations before might only have been accomplished by an air raid or other kinetic military action that would have incurred significant civilian casualties, and that so far there have been no reported human casualties resulting from Stuxnet. Thus malware might lessen the number of civilian casualties in conflict.


One might argue that more accurate information given to decision makers during wartime should help them make better decisions on the battlefield. On the other hand, these new informational warfare capabilities might allow states to engage in continual low-level conflict, eschewing efforts at peacemaking which might require political compromise. As was mentioned in the introduction above, information technologies are in a constant state of change and innovation. The internet technologies that have brought about so much social change were scarcely imaginable just decades before they appeared.

Even though we may not be able to foresee all possible future information technologies, it is important to try to imagine the changes we are likely to see in emerging technologies. James Moor argues that moral philosophers need to pay particular attention to emerging technologies and help influence their design early on to encourage beneficial moral outcomes (Moor). The following sections contain some potential technological concerns.

Information technology has an interesting growth pattern that has been observed since the founding of the industry. Intel engineer Gordon E. Moore noticed that the number of components that could be installed on an integrated circuit doubled every year for a minimal economic cost, and he thought it might continue that way for another decade or so from the time he noticed it in 1965 (Moore). History has shown his predictions were rather conservative. This doubling of speed and capability, along with a halving of production cost, has roughly continued every eighteen months since and is likely to continue.

This phenomenon is not limited to computer chips and can also be found in many different forms of information technologies.

The potential power of this accelerating change has captured the imagination of the noted inventor and futurist Ray Kurzweil. He has famously predicted that if this doubling of capabilities continues and more and more technologies become information technologies, then there will come a point in time where the change from one generation of information technology to the next will become so massive that it will change everything about what it means to be human.

If this is correct, there could be no more profound change to our moral values. For example, Mary Midgley argues that the belief that science and technology will bring us immortality and bodily transcendence is based on pseudoscientific beliefs and a deep fear of death. In a similar vein, Sullins argues that there is often a quasi-religious aspect to the acceptance of transhumanism that is committed to certain outcomes, such as the uploading of human consciousness into computers as a way to achieve immortality, and that the acceptance of the transhumanist hypothesis influences the values embedded in computer technologies, which can be dismissive or hostile to the human body.

Just because something grows exponentially for some time does not mean that it will continue to do so forever (Floridi). While many ethical systems place a primary moral value on preserving and protecting nature and the natural given world, transhumanists do not see any intrinsic value in defining what is natural and what is not, and consider arguments to preserve some perceived natural state of the human body an unjustifiable obstacle to progress.

Not all philosophers are critical of transhumanism. As an example, Nick Bostrom of the Future of Humanity Institute at Oxford University argues that, putting aside the feasibility argument, we must conclude that there are forms of posthumanism that would lead to long and worthwhile lives, and that it would be overall a very good thing for humans to become posthuman if it is at all possible (Bostrom).

Artificial Intelligence (AI) refers to the many longstanding research projects directed at building information technologies that exhibit some or all aspects of human-level intelligence and problem solving.

Artificial Life (ALife) is a project that is not as old as AI and is focused on developing information technologies and/or synthetic biological technologies that exhibit life functions typically found only in biological entities. A more complete description of logic and AI can be found in the entry on logic and artificial intelligence. ALife essentially sees biology as a kind of naturally occurring information technology that may be reverse engineered and synthesized in other kinds of technologies. Both AI and ALife are vast research projects that defy simple explanation.

Instead the focus here is on the moral values that these technologies impact and the way some of these technologies are programmed to affect emotion and moral concern. In 1950, Alan Turing made the now famous claim that by the end of the century one would be able to speak of machines thinking without expecting to be contradicted (Turing). A description of the test and its implications for philosophy outside of moral values can be found in the entry on the Turing test.

For example, Luciano Floridi argues that while AI has been very successful as a means of augmenting our own intelligence, as a branch of cognitive science interested in intelligence production it has been a dismal disappointment. The opposite opinion has also been argued, and some claim that the Turing Test has already been passed, or at least that programmers are on the verge of passing it. For instance, it was reported by the BBC in 2014 that the Turing Test had been passed by a program that could convince the judges that it was a 13-year-old Ukrainian boy, but even so, many experts remain skeptical (BBC). Yale professor David Gelernter worries that certain uncomfortable moral issues would be raised.


Gelernter suggests that consciousness is a requirement for moral agency and that we may treat anything without it in any way that we want without moral regard. Sullins counters this argument by noting that consciousness is not required for moral agency. For instance, nonhuman animals and the other living and nonliving things in our environment must be accorded certain moral rights, and indeed, any Turing-capable AI would also have moral duties as well as rights, regardless of its status as a conscious being (Sullins). AI is certainly capable of creating machines that can converse effectively in simple ways with human beings, as evidenced by Apple Siri, Amazon Alexa, OK Google, etc.


But that may not matter when it comes to assessing the moral impact of these technologies. In addition, there are still many other applications that use AI technology. Nearly all of the information technologies we discussed above, such as search, computer games, data mining, malware filtering, and robotics, utilize AI techniques.