Final Ethics Paper

Gabriel Zapata

 

CST 373  – Ethics in Comm & Tech


Professor Scott

 

May 20, 2017

Cortana the AI

Microsoft has entered the consumer AI market and has used Cortana as the 'personality' it presents to users. For the past few years Cortana has been the artificial intelligence across all of Microsoft's devices and has been through thick and thin with Microsoft and its users. Cortana clearly has benefits that come from the data it collects, but not everyone is aware of the harm that can come from the back side of its inner workings. Going forward, Microsoft continues to add features that require a certain amount of user data, and some users see ethical problems with that (1). Some customers are not comfortable with it, and rightly so, though it may not be as bad as they think given Microsoft's policies. Cortana can help people in ways that calendars and notebooks simply cannot, and this kind of assistant is the way of the future; but collecting and handling that data raises privacy and ethical concerns for many groups of people, concerns that can be addressed through compromise.

To define clearly how the implementation of Cortana in the work environment and lives of its users can be handled, the company's intentions in managing data for its various features must first be clear. Before that is discussed, the lines must be drawn around who exactly is invested in this type of technology and why they are a part of it. First and foremost is the company creating the software, Microsoft. Its goal is to bring an AI into customers' lives that provides seamless, invaluable help in many aspects of their day. Cortana is used not only on phones but across all the device types Microsoft offers, which exposes her to threats in many ways. Microsoft also manages all the data flowing in and out of those devices, including Cortana's, which makes the safety of that data a top priority. The next stakeholders are government agencies, most notably the NSA and others such as the CIA, which seek data on citizens and non-citizens alike. Their interest in Cortana lies in getting access to user data that could help them gather information about certain people, but that access opens the door to major problems including blackmail, ransoming, and even treason (2). This is hypothetical for the moment, but it could become reality if laws change, so such decisions must be weighed carefully. Lastly, the consumers, the users, are the people who benefit directly and get the most out of this technology through its many features, and they are also at the greatest risk of damage to their lives unless the ethics and policies around it are handled through careful regulation and management. For that to happen, the ethics must be clear and the wrongs transparent so that they can be avoided.

Cortana's main purpose is to help people as quickly and efficiently as possible while giving them the maximum amount of time to focus on the more important parts of their everyday lives (3). To do this, however, Microsoft must collect data from users in a way that allows the AI to take over tedious tasks such as scheduling, timing, answering questions, coordinating applications on the device, and many other nontrivial abilities (3). That requires a large amount of user data, much of which the user may not even know about unless they have read the software's entire policy. Within that policy is an acknowledgment that the data could be stolen, since nothing on any Microsoft device is completely safe, in which case the data could be used against the user depending on what it contains (13). For example, thieves or hackers could blackmail someone over an email or message that would be damaging to them, and the victim could do little about it without risking having the information leaked. The company tries its best to stop this from happening, but it cannot guarantee success. Whether the trade-off is even worth it is a question the company still asks, and its answer is that the benefits significantly outweigh the risks, especially since the data is very difficult to steal. However, thieves and hackers are not the only entities users should worry about; there are also government agencies like the NSA and CIA (5). They have become a real source of fear for tech companies such as Microsoft because they take data any way they can in order to "apprehend" criminals based on possible leads, which can be seen as an abuse of power (6). This became a major issue when the FBI pressed Apple to unlock a terrorist's iPhone because it had leads and evidence; the problem is that if Apple had complied, it would have given the FBI the ability to unlock any iPhone, regardless of whose it was, criminal or not (6). That is an ethical issue in its own right, but I digress. The privacy problems facing users do have possible solutions, though sorting them out among the different entities involved is difficult. There are a couple of plausible approaches, each with its own benefits and failings and each grounded in a different ethical ideal, but implementing any of them individually may be difficult, if not impossible to an extent.

Even once the problem of how to approach this issue of ethics and privacy is understood, solving it is difficult, though solutions exist. One approach I have come up with would be to simply cut off the data: do not allow Microsoft to take data from the user unless the user manually approves what data gets sent to the AI for use throughout the software (10). This solution follows the Rights Approach ethical framework in that it protects the ethical rights of the people affected by an action, in this case the privacy rights users deserve (11). Another solution would be to give the company strict rules on how to distribute the technology and require it to warn users beforehand. I believe this second solution is the most practical, and it reflects the Utilitarian Approach by serving the greater good of most of the people affected by Cortana's privacy issues (12). The Rights Approach solution, by contrast, would demand far more, since Microsoft could take no data from users without their specific consent.

The solution of not allowing the company to take any data from Cortana's users in its everyday functioning would remove the incentive for any entity to hack into the device, rendering the privacy issues moot. This is extremely simple, but it also renders Cortana's core functionality useless, which would not work in Microsoft's model (9). A workaround would be to be completely transparent at all times and let the user check off every piece of data that is sent in and out of Cortana, giving users full control of their data and letting them manage their own risk. This would keep Cortana useful and still a viable product. The drawback, of course, is that it is much slower in practice because of how much the user now has to interact with it, which takes time away from more pressing tasks (9). Because of that, I do not believe this is the best course of action, which leads me to conclude that the ideal solution is clearly the Utilitarian Approach, in which Cortana maximizes the well-being of the people who use the software and privacy is protected with minimal opportunity for intrusion.
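
To make the check-off idea concrete, here is a minimal sketch, in Python, of what per-category consent gating could look like. The category names and the ConsentStore class are hypothetical illustrations made up for this example, not any real Cortana or Microsoft API.

```python
# Hypothetical sketch of per-category consent gating for an assistant like
# Cortana. The categories and ConsentStore are illustrative only and do not
# correspond to any real Microsoft API.

from dataclasses import dataclass, field


@dataclass
class ConsentStore:
    """Tracks which data categories the user has explicitly approved."""
    approved: set = field(default_factory=set)

    def grant(self, category: str) -> None:
        self.approved.add(category)

    def revoke(self, category: str) -> None:
        self.approved.discard(category)


def outgoing_payload(user_data: dict, consent: ConsentStore) -> dict:
    """Return only the fields the user has opted in to sharing."""
    return {k: v for k, v in user_data.items() if k in consent.approved}


# Example: the user allows calendar data but not location or contacts.
consent = ConsentStore()
consent.grant("calendar")

data = {"calendar": ["9am standup"], "location": "36.6N,-121.8W", "contacts": ["Alice"]}
print(outgoing_payload(data, consent))  # {'calendar': ['9am standup']}
```

The point of the sketch is simply that nothing leaves the device unless the user has checked off that category, which is the trade-off discussed above: more control, but more interaction required from the user.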

Restricting how the company and its developers handle data is an excellent first step, and one the company is already taking; however, it is not the only thing that must happen. For this solution to work without the risk of data theft or of hackers using people's data for blackmail, an extra layer of defense should be added: storing all valuable and important data on the device itself. That would make it far harder for intruders to see the actual data, because they would have no remote connection to it. With this in place, the data can be kept safely away from the entities that could harm Cortana's users (6). The idea does come with a cost for developers, because they would not only have to change the software but also restrict how they store data and run Cortana's internals. It again reflects the Utilitarian Approach, in that it maximizes the well-being of the people who use these devices. With this in place, users would clearly be at less risk from the obvious criminals, but there are still the law-abiding ones, the government agencies, which can reach this kind of technology regardless of what security walls are in place if policies are passed granting them access (8). As explained earlier, though, the FBI did not succeed in getting access to all iPhones, only to the one terrorist's phone, so Microsoft and other consumer device companies must stay careful in protecting their customers' data from potential abuses of power by government agencies; for now, existing law makes such access unlikely. As long as data storage is kept local and customers get transparent feedback about their data, this would indeed be the optimal solution and the right ethical approach to protecting every customer's privacy from dangerous entities.
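
As a rough illustration of the local-first storage rule described above, the following sketch routes hypothetical data categories either to on-device storage or to cloud sync. The category list is an assumption made for the example, not actual Microsoft practice.

```python
# Illustrative sketch of a local-first storage rule: sensitive categories stay
# on the device, everything else may be synced to the cloud. The category
# names are assumptions for this example only.

SENSITIVE_CATEGORIES = {"messages", "email", "location_history"}


def storage_target(category: str) -> str:
    """Decide where a piece of assistant data should live."""
    return "local_device" if category in SENSITIVE_CATEGORIES else "cloud_sync"


for category in ["messages", "reminders", "location_history"]:
    print(category, "->", storage_target(category))
# messages -> local_device
# reminders -> cloud_sync
# location_history -> local_device
```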

Cortana's usefulness to people in their everyday lives is enormous, and she can do far more than was possible not long ago, but the dangers of this kind of technology can lead to very real consequences. With careful thought, though, solutions to the difficult issue of data privacy become apparent. By protecting the greatest possible number of users from bad actors, whether hackers or agencies, this solution founded on the Utilitarian Approach is invaluable to Microsoft and its customers in protecting people's privacy.

 

(1) Jones, B. “Is Cortana a Dangerous Step towards Artificial Intelligence?” Digital Trends. Web. 14 Feb. 2015.

(2) Brandon, J. “How Microsoft Cortana will run your entire office by 2020” ComputerWorld. Web. 30 Sept. 2016.

(3) Chacos, B. “Microsoft envisions a future where Cortana and a legion of smart bots act as our butlers” PC World. Web. 30 Mar. 2016.

(4) Souppouris, A. “Microsoft hopes Cortana will lead an army of chatbots to victory” Engadget. Web. 30 Mar. 2016.

(5) Molen, B. “Her name is Cortana. Her attitude is almost human.” Engadget. Web. 4 June 2014.

(6) Chessen, M. “The AI Policy Landscape” A Medium Corp. Web. 30 Mar. 2017.

(7) Chessen, M. “A Tale from the End of Humanity” A Medium Corp. Web. 15 Nov. 2016.

(8) Dignan, L. “Can AI really be Ethical and Unbiased?” ZDNet. Web. 16 Oct. 2016.

(9) Singer, P. “Can Artificial Intelligence Be Ethical?” Project Syndicate. Web. 12 Apr. 2016.

(10) Yudkowsky, E. “Complex Value Systems are Required to Realize Valuable Futures” Intelligence. Web. 2011.

(11) Patil, M. “Artificial Intelligence Vs Humans Will AI Surpass Humans” Academia. Web.

(12) Alhadeff, E. “The Smartest AI in the Universe Is More Human Than You Think” Microsoft. Web. 19 Jan. 2015.

(13) Al-Riyami, F. “The future of Cortana is intelligent, emotional, and potentially dangerous?” ONMSFT. Web. 2014.

(14) “AI Ethics and the Future of Humanity” Sparks & Honey. Web. 3 Nov. 2016.

AgTech Saves Earth (13)

So how is AgTech going to save the Earth?

After considering all the challenges (changing demand, climate change, rising costs, and decreasing reliability), some might be doubtful. I think it's too early to start doubting; there are so many more opportunities flourishing along with AgTech. One thing we have to look forward to, as stated in the article linked below: "We can repurpose the technologies we've been developing to solve other problems and put them to work for agriculture."

Technology is the answer now and will continue to be. There's nowhere to go from here but up. The necessary hardware is becoming less expensive as it evolves and advances. On top of the hardware, "cloud computing and machine learning allow us to use this massive amount of data to make smarter real-time decisions that improve yield and reduce costs." A lot of software and hardware is being developed because of AgTech, and as a result we are going to see many more jobs in agriculture and engineering.
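
As a toy illustration of the kind of real-time, data-driven decision the quoted article describes, the sketch below turns a handful of made-up soil-moisture readings into an irrigation decision. The sensor values and threshold are invented for the example; real precision-agriculture systems are far more sophisticated.

```python
# Toy example: decide whether to irrigate a field based on recent
# soil-moisture sensor readings. Values and threshold are made up.

from statistics import mean


def should_irrigate(moisture_readings: list, threshold: float = 22.0) -> bool:
    """Irrigate when average volumetric soil moisture (%) drops below the threshold."""
    return mean(moisture_readings) < threshold


recent_readings = [22.1, 21.8, 20.5, 19.9]  # hypothetical sensor data (% moisture)
print(should_irrigate(recent_readings))  # True: average is about 21.1%
```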

Since AgTech is still very new, there is so much room for creativity, and chances are we are going to see many things that have never been done before. It's a time for firsts again in the field, and it's amazing. The agriculture industry will keep moving forward, and precision agriculture will help farmers improve their management and crop yield.

Not only will we be seeing improvements in AgTech, we will also see changes in computational biology as we start to explore biofabricated meats and leathers.

AgTech is changing the world and the opportunities are endless!

https://techcrunch.com/2015/09/07/the-next-food-frontier-how-agtech-can-save-the-world/

Science gone wrong (9)


I recently read a book in my cultural anthropology course that really turned me off. I have high regard for the sciences and treasure the scientific method for multiple reasons. Reading this, however, I was not only angry but saddened that this line of work was carried out in the name of science.

The story is about a cultural anthropologist, Napoleon Chagnon, who went to the Amazon to study a tribe called the Yanomami, a primitive and indigenous people. Chagnon began taking their blood samples and even injecting them with plutonium and other materials. Though he may have justified his reasoning for choosing them as a control group, I was completely disturbed by the way he went into another culture and played his American card so wrongly.

There needs to be a line drawn for anthropologists, and for any researcher for that matter. One should not be able to pull a science card to justify harming another human being. This didn't even happen that long ago; if I'm not mistaken, he is still alive and is a professor. He was commissioned to do field work for a researcher, Neel, who was part of a different scientific initiative. Hopefully, this publicly told story will help future researchers not use their sophisticated methods to contaminate a population.

A people's culture should never be taken or tested on, especially for scientific exploratory reasons. This is not strictly a tech-related ethical dilemma, but the same mindset of superiority used to harm a less sophisticated tribe, which we see unfold in Darkness in El Dorado, can still be prevalent in the workplace.

It may not be as physical as taking blood or injecting plutonium, but it could lead someone to be verbally aggressive and abruptly disrupt an existing culture.

https://savageminds.org/tag/napoleon-chagnon/

Open Source (10)


As data becomes more and more abundant, so does yield for some farmers. At face value it may seem strange to think that data is directly associated with more food and profit, but it is. As large farms collect data, they can use that information to gain insight into their soil, crops, assets, and workers. This is a great idea, but it creates a wide gap between small and large farmers.

One way to close this gap is open source. If large farmers buy into the open source community as a way to share their data, then small farmers can benefit. I am in favor of open source, and naturally in favor of open data. I do, however, understand the initial pushback that exists.

I worked on a data-driven project for an AgTech start-up and was unable to see all of their data. There is a real concern among AgTech professionals about proprietary data. As idealistic as the notion of open data sounds, I don't think it will be well received at first. Once AgTech companies become more established, I think there will be less resistance to sharing their data freely.

I say this because I was unable to access the company's data out of concern for its business advantage. As much as I would want to help a small farmer get more data, I also understand the risk a smaller company takes in sharing data it believes makes its business unique.

A good compromise might be for AgTech companies to share their more trivial data. Although a company could simulate data, real data is much better, even if it is trivial.
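
To make that compromise concrete, here is a small hypothetical sketch of stripping the fields a grower might consider proprietary before publishing a record to an open dataset. The field names are invented for the example.

```python
# Hypothetical sketch of the "share the trivial data" compromise: remove
# fields a grower would consider proprietary before publishing a record.
# All field names are invented for this example.

PROPRIETARY_FIELDS = {"buyer_contracts", "price_per_unit", "field_gps_boundary"}


def publishable_record(record: dict) -> dict:
    """Return a copy of the record without proprietary fields."""
    return {k: v for k, v in record.items() if k not in PROPRIETARY_FIELDS}


record = {
    "crop": "romaine",
    "planting_week": 14,
    "soil_type": "sandy loam",
    "price_per_unit": 11.50,        # kept private
    "buyer_contracts": ["..."],     # kept private
}
print(publishable_record(record))
# {'crop': 'romaine', 'planting_week': 14, 'soil_type': 'sandy loam'}
```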

https://www.linkedin.com/pulse/big-data-small-business-ethics-agriculture-pedro-s-sarmento

Tax on knowledge (7)


Should top-performing tech companies be required to share their wealth of information? Just as companies are taxed monetarily, should something similar exist for knowledge? This ideal may be better received in an American context, but not so much somewhere like, say, Colombia.

I recently met a Colombian woman whose husband's cousin is wealthy. He has enough money for five generations to maintain his standard of living, but he wants to keep earning enough for ten. She asked him if he'd ever consider donating to charity, and he said no. Most people, apparently, do not see the value in philanthropy. In the US, we have a leading philanthropic company in Salesforce, which built a 1-1-1 model: giving away 1 percent of its time, product, and equity. As someone who values charity, I see that as business done right.

One idea I've seen some companies start is engineer-in-residence programs, much like Facebook's involvement at CSUMB. The idea is novel and should be adopted by more top-performing companies. It makes sense globally for computer science specifically, since demand for talent is high but the number of qualified people is disproportionately small. If we take a utilitarian approach, the numbers make sense. The idea is simple: top-performing companies should be expected to spend a fraction of their time at local schools, whether elementary, middle school, high school, or college.

This might sound a bit socialist, but to me, if we took a tax-scheme approach, more people would be able to benefit. Taxes pay for roads, and knowledge teaches people how to build those roads. In principle this may raise some ethical concerns, but they would be the same ones we already have about taxes.

 

Source: I kind of just thought it was interesting. I am going to source myself!

Design for non-native speakers (16)


Last weekend, I hosted Salinas's first overnight hackathon at the National Steinbeck Center in Oldtown Salinas. It was a neat, AgTech-themed format. The spirit of the event was about bringing different worlds of thought together to develop a solution to a real-world problem. We had agricultural industry leaders like Lorrie Koster and Joe Pezzini, along with many local farmers and AgTech professionals. Day one started with a keynote by Lorrie Koster on industry priorities, followed by farmers and AgTech professionals pitching ideas to inspire participants' projects. One farmer, Jackie Vazquez from Sundance, stressed the need for simplicity when designing for farm workers, many of them indigenous people who barely speak Spanish and instead speak their own dialects.

This makes me wonder about a huge workforce that goes unnoticed when software, or anything else, is designed. Is it acceptable for applications to be designed and built without regard for users who are not comfortable with English? I don't know if I could call it unethical, but part of me thinks that if a company has the labor power to design with the intended user in mind, it should be responsible for designing to the needs of that user.

Not doing so creates a barrier between the product and its intended user that could be eliminated. On the spectrum of right and wrong, I would say it's more wrong not to design for those who are not native speakers of the dominant language, and not just here, but universally. Putting myself in that situation, I would feel less empowered to do my job if I were unable to operate a tool that was created for me to interact with.

Instead of having non-native speakers blindly memorize the taps that lead to the correct screen, companies need to keep the user in mind not only while designing but also by providing language preferences. Not doing so needs to be seen as unjust.
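
As a small sketch of what providing language preferences can look like in code, the example below does a string lookup in the user's preferred language with an English fallback. The strings and language codes are placeholders, and a real application would use a proper i18n library such as gettext.

```python
# Minimal sketch of a language-preference lookup with an English fallback.
# Strings and language codes are placeholders for illustration only.

MESSAGES = {
    "en": {"clock_in": "Clock in", "clock_out": "Clock out"},
    "es": {"clock_in": "Registrar entrada", "clock_out": "Registrar salida"},
}


def translate(key: str, lang: str, fallback: str = "en") -> str:
    """Look up a UI string in the user's preferred language, falling back to English."""
    return MESSAGES.get(lang, {}).get(key) or MESSAGES[fallback][key]


print(translate("clock_in", "es"))       # Registrar entrada
print(translate("clock_in", "unknown"))  # Clock in (no strings yet, falls back)
```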

 

Source: https://www.facebook.com/herScriptCS/videos/1308327032554455/


Unpaid internships? Right or wrong? (15)


In my education here at CSUMB, the subject of internships has come up many times. As a matter of fact, it was a central theme of my experience in the CSin3 program. Every Friday, for nearly all of my time here, I heard about internships: how to prepare for them, how to search for the best one, virtually everything there is to know about them. As a computer science student at CSUMB, I have held three internships.

In a job market that asks for five years of experience with a language that was developed only three years ago, the hunt can feel a bit overwhelming, and an intern position looks attractive. My first internship was at a college under my computer science professor, Dr. Sonia Arteaga; I worked as a computer science research intern developing an Android application that detected diseases on apples. My second internship was at a local start-up called HeavyConnect. After my first internship I had the option of interning at Taylor Farms to improve their image processing algorithm, but I thought it more fitting and exciting to work for a start-up, so I joined HeavyConnect's innovation wing, developing products that were not core to their business model but could become useful once validated by the market. My last internship was at Salesforce, the fourth-largest software company in the world.

I'd say my road to Salesforce relied heavily on my previous experiences at both the college and HeavyConnect. All of my internships were paid, with the exception of a fraction of one. I read a post on Hackathon Hackers asking people about unpaid internships, and I responded with my own experience in mind. During my time at HeavyConnect, their funds ran out; they were clear with everyone about it and gave us the choice to leave or stay. I decided to stay to improve my skills. It is worth noting, though, that I was only there about three hours a week and was working on a proof-of-concept product, so my work was not illegal.

Some might argue, and in fact did argue, that it wasn't right for me to be working as an unpaid engineer. Though I wouldn't advise someone to do what I did, I do see the value in it and can speak to the benefits I received from my volunteered time. If someone chooses to do the same in pursuit of skill-building, I do not think it is unethical.

I know that under certain conditions it is illegal, but if someone can negotiate and work out a student experience like I did, I think it is worth it and should not be frowned upon. Again, in such a competitive environment, and coming from a school that isn't highly recognized, the only things that can take you to the top are sheer work ethic and skill.

 

https://www.facebook.com/groups/hackathonhackers/
