Across
- 2. (9,9) Describes technology platforms that are broadly based on the scientific discipline of artificial intelligence (AI), the simulation of human thought processes in a computerised model. These systems learn and interact naturally with humans to extend what machines can do on their own, e.g. performing data analytics on major healthcare research data.
- 4. (10,5) In the context of this unit, this refers to how the potentially negative social, moral and ethical implications of using cognitive computing can be reduced. An example would be ensuring that medical data relating to patients is protected and does not fall into the wrong hands.
- 6. Rules of conduct governed by professional and legal guidelines within a particular time and place.
Down
- 1. (7,14) This is taking into consideration how the action(s) taken by a person, persons or machine can impact on another person or persons, e.g. if a machine is intended to carry out the role of a human, what impact could this have on that human?
- 3. (5,14) This is taking into consideration whether it would be morally acceptable to carry out an action. An example of this is GM (genetically modified) crops: some people believe in them and find them morally acceptable, while those against them say it is not morally acceptable to interfere with nature. Morals are principles or habits in relation to right and wrong conduct.
- 5. (6,14) How something would impact on society, e.g. implementing automated technology could make people redundant, but on the other hand new technology can also create new jobs.
