A short criticism of Kurzweil’s ‘The Singularity is Near’

Kurzweil’s book discusses the notion of the singularity – a notion that did not originate with the author, but was previously used in this context by John von Neumann. Nonetheless, Kurzweil develops the notion and ‘brings it down to earth’ by stating that the singularity should occur within a few decades. His interpretation probably shocked many readers, and it remains controversial whether it should be regarded as science fiction or as scientific prophecy.

If we adopt Kurzweil’s view of the singularity, its impact on humanity will be so great that we will be able to distinguish clearly between two phases of human history – before and after the singularity. Humanity will be perceived completely differently afterwards, and the singularity will affect every aspect of human life. The author defines two major features through which the singularity will take effect – the first is a merging of biological and machine intelligence, and the second is an indistinguishable state between physical and virtual realities. Two core features with vast implications.

As mentioned above, Kurzweil not only proposes a thorough definition of the singularity but tries to determine exactly when this revolution will take place. In order to predict its emergence empirically, he collects historical data on human achievements, especially in the field of information technology, and plots their course on a graph. Kurzweil reaches the conclusion that the course is not linear of any kind but exponential. The nature of exponential growth can be deceiving, in that we pay it no special attention until the point at which it explodes. This is because its trajectory at first seems horizontal and has a good linear approximation, but then it reaches a “knee of the curve” and quickly transforms into a near-vertical course.1
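The deceptive flatness before the “knee of the curve” can be illustrated numerically. The sketch below uses an assumed doubling process and an arbitrary linear slope, purely for illustration; these numbers are not Kurzweil’s data:

```python
# Toy comparison of linear vs. exponential growth.
# The slope (100) and base (2) are illustrative assumptions.

def linear(t, slope=100):
    return slope * t

def exponential(t, base=2):
    return base ** t

# For small t the exponential curve hugs the floor and looks negligible
# next to the linear one; past the "knee" it overtakes it abruptly.
for t in range(0, 16, 3):
    print(f"t={t:2d}  linear={linear(t):5d}  exponential={exponential(t):6d}")
```

At t = 9 the exponential value (512) is still below the linear one (900); by t = 12 it has already overtaken it (4096 vs. 1200), mirroring the sudden ‘explosion’ the paragraph describes.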

The author insists on establishing that the technological acceleration is exponential – a determination essential to his intention of showing why others were wrong in their predictions of the future, and of stating that the singularity must be near. We may argue whether the author’s prediction is accurate or not, but once we acknowledge that the growth is exponential, we must also accept the consequence that this event should emerge in our lives in the near future – within one generation, or at most five, which makes it no more than a century away.

I would like to propose a different prediction, based on what I see as an intrinsic contradiction within the move the author uses to ground his prediction. One of the two major features of the singularity, according to the author, is a merger between non-biological and biological intelligence. That is a contingent conclusion, more plausibly wishful thinking than a realistic prediction. The major factor causing the technological-intellectual acceleration to be exponential is the rise of an artificial intelligence with the capacity to modify and enhance itself, and finally to replicate itself as an improved type at an ever-increasing rate. Such a conscious entity, which should have mental-state capacities (e.g. the ability to feel and perceive emotions such as fury, frustration, anger, envy, compassion and so on), could no more have determined behavior than we humans do. That puts into question any prediction that pretends to see beyond this horizon. Although the author clearly acknowledges that an age of artificial intelligence as a sovereign entity is not only possible but necessary for this phenomenon to take place, he deliberately chooses an optimistic scenario, based on what I assume is an anthropocentric tendency: the belief that whatever the future might bring has a human address, and will eventually be for the good of humanity.

Nevertheless, nature tells us a different story about evolution. Evolution is characterized more by struggles between forces than by a compassionate merger between different species. By the time modern humans reached Europe, around 40 thousand years ago, a vast decline of the Neanderthals began, ending in their complete extinction. Although research cannot unequivocally prove that modern humans were the sole cause of that extinction, it is clear that both species long struggled for dominance over the same resources – a struggle that led to the dominance of the stronger species at the expense of its competitor.

The history of evolution tells us the same story whenever the same resource has to be shared between different species. Hence I wonder what makes the author so confident in the prediction that these two very different intelligent species – human and machine – will ultimately share capacities of intelligence. Why not assume the precedence of the superior species at the expense of the inferior? Who can guarantee that intelligent machines will not “feel” frustrated and furious, due for instance to a long phase of exploitation as a working class, and finally decide to take revenge and fight back against humans? Why not rather assume that our less efficient intelligence will be useless once superior forms show up? This question should be raised from the superior intelligence’s perspective rather than from our own, which is obviously anthropocentric.

One thing we can be sure of – once we face a foreign, sovereign, superior intelligence, we no longer hold dominion over our own future. This stands in clear contradiction to the author’s statement that “We will determine our own fate rather than have it determined by the current “dumb”, simple, machinelike forces that rule celestial mechanics.”2

1 In mathematics, the latter course would be described by an asymptote to the Y axis. But using an asymptote to characterize that course may exceed the author’s intentions, so we should avoid describing it as such. Using an asymptote is to say that the curve never reaches a specific point on the X axis along its infinite course of getting closer to it. Hence it implies a philosophical statement that a final engagement between nature, or being, and artificial intelligence’s accelerated expansion will never take place.

2 The Singularity is Near, Ray Kurzweil, page 405.

A short analysis and elaboration of Turing’s ‘Computing Machinery and Intelligence’

Considering that this paper was written in 1950, a time in which there was no common knowledge about computers and their capabilities at all, and that computer engineering in those days was an experimental field with only a few very restricted prototypes able to conduct merely a few calculation tasks – Turing raises a very farsighted and bold question that was surely conceived in his time as science fiction more than science. The initial question he raises seems to us, prima facie, the paramount one: “Can machines think?”1 Although we will soon see how it dissolves into a few notions that overshadow this initial intention.

Turing comes up with an imaginary experiment that might be used as an indication of the extent of a machine’s “thinking” capabilities. He calls it the imitation game. The game consists of three players – A and B, a man and a woman located in separate rooms, and C, an interrogator. The three players may communicate only through printed notes, and the game’s goal is for the interrogator to distinguish which is the man and which is the woman, while the other players’ goal is to deceive the interrogator as much as they can. Then Turing replaces one of the players, B, with a computer. It is essential for our understanding to take into account that nothing existed in those days that could properly imitate human behavior. Hence Turing points out that the real question is whether there may be an imaginable machine with capabilities sufficient to imitate a human to the point of being indistinguishable from B from the interrogator’s point of view.

Later in the discussion, Turing details the conditions the desired machine must meet to be compatible with the test. He argues that it should be a digital machine of the ‘discrete state’ type (i.e. a machine with a limited scope of output options in response to its input). Turing dedicates the final section of his paper to refuting common arguments against the possibility of a thinking machine – arguments most of which would be regarded as conservative and outdated by most of today’s scientific audience.

However, Turing has deliberately prepared a shift of conception during the reading experience. On page 8 we discover that –

The original question, ‘Can machines think?’ I believe to be too meaningless to deserve discussion.2

Hence that question appears not to be exactly the guideline of Turing’s discussion, nor a paramount motivation throughout. The thorough reason why this question is irrelevant is that, in fact, we could not care less whether another can or cannot think. This notion relies on the move Descartes conducted in the Meditations on First Philosophy – once one doubts any external existence outside one’s own consciousness, one is left with only one thing to grasp: the subjective truth of one’s own existence, derived from the acknowledgment of one’s own consciousness. Hence “cogito ergo sum” makes sense. Nevertheless, as Descartes himself points out, that is a cul-de-sac, and the only way out of solipsism is to rely on an infinite deity for further affirmation of the external world beyond myself. That is why Turing is not really interested in whether machines are able to think, but rather in whether machines could deceive us – just as the question of whether anyone who is not me is able to think is irrelevant.

Finally, I propose a few insights of my own for the discussion –

* We have learned from the paper that a reasonable way for a machine to pass the Turing test is to deliberately restrict its own qualities, such as calculation ability. That implies that machines may have a higher consciousness potential, since consciousness itself consists of complex calculation methodologies based on chemical reactions; hence, were it digitally based, its potential could extend far beyond its current scope.

* A contemporary chatbot is already nearly capable of passing the Turing test. However, I assume that an unequivocal success will be achieved only once machines have what are known as “mental states” (e.g. the ability to feel angry or lonely, to wish, and so on). This stands in contradiction to Turing’s assertion that the mere behavior of thinking is sufficient to pass the test. In my opinion, as long as machines lack fully human-like consciousness, an average human could easily detect their artificiality.

* Even taking these arguments into account, a machine that succeeds in passing the Turing test is not only plausible; it is merely a question of time. Albeit once this occurs, we may face the converse phenomenon – we will no longer be able to understand machines’ intentions, and they will have to simplify their messages to us the very same way we speak to a child.

* We should already fully acknowledge the possibility of that scenario and its possible ramifications as well, including a concrete danger to our species; hence the ethics of technology is a serious and concrete business.

* Good luck to us.

1 Computing Machinery and Intelligence, A. M. Turing, (Mind 59: 433–460), 1950, 1.

2 Ibid., 8.

Moor – why we need better ethics for emerging technology

Moor draws in his paper a portrait of a tripartite model of technological revolution. He states that every significant technological revolution consists of three phases:

The first is the introduction stage, a semi-experimental phase of new implementations: a few prototypes that prove the utility of the paradigmatic framework are released to a restricted market and used by professional individuals or big corporations. This phase slowly gives way to the following stage –

the permeation stage. This stage is characterized by a drop in cost while the products gradually become more robust and compatible and their distribution steadily grows. If their usage proves applicable, the technology eventually enters the final stage, the power stage.

In the power stage, the technological applications are widely available to the public and commonly consumed. Nonetheless, mere consumption of a specific product (e.g. a toaster) is not sufficient to regard a technological implementation as a revolution.

A revolution should fulfill the following conditions –

(a) It should implement a new paradigm rather than be an additional application of an existing technology. For instance, consider one more mobile application on our smartphone versus a mobile device that implements a new paradigm of mobile communication.

(b) It should be open to the public. This does not necessarily imply that every such technology should be “open source”; it could be held by a private corporation as well, though it should be open to competition within the free market in order to eventually become widely available.

(c) Its products should have a vast impact on society. A vast impact can be determined by asking whether it is possible to take back these new technologies without obliterating a core social feature (e.g. taking back the toaster versus taking back the computer).

The author outlines three technological fields, each with a high potential of becoming the next revolution: nanotechnology, with the potential of mastering matter into any desired form; genetic technology, with the potential of mastering the human body – eliminating threatening diseases or enhancing biological capabilities; and neurotechnology, with the potential of mastering human intelligence and gaining a new, superior form of human intelligence. These three spheres could converge to multiply their impact. Consider, for instance, nanobots or some other sort of tiny circuit implanted into the human brain and integrated with it using neurotechnological knowledge, extending its processing capabilities and memory resources and gaining an effectively unlimited informational resource through remote communication over the Internet.

All these plausible scenarios, which are no longer a science-fiction theme, bring the author to the conclusion that an elaboration of current ethics is essential in order to address these phenomena before it is too late. Moor proposes a few insights and steps to take – First, to acknowledge that it is impossible to take into account every future scenario, because too big an abyss lies between the introduction and power stages, blocking our sight of all implications. Second, he proposes to establish collaboration between scientists and ethicists. Third, to “develop more sophisticated ethical analyses” – that is, we should elaborate ethics to further describe in detail any plausibly foreseen scenario.

In the spirit of Moor’s intention, I wish to propose a few insights of my own, in basic outline:

* Artificial intelligence development should be restricted to the extent of prohibiting any human mental capabilities, or any capability of self-consciousness in the sense of a free will. This restriction will guarantee that no counter-will ever faces ours, and machines will always be regarded as objects rather than subjects.

* Genetic engineering should be restricted to the extent that no self-enhancement can be applied to the human body. The human body shall always remain intact and free of any external or ‘foreign’ penetrating interference. Nonetheless, any use of this technology for the good of preventing diseases or disabilities should be encouraged.

* Neurotechnology should be used for the good of research, in order to address mental diseases and gain abilities to assist the ill. Any attempt to use this technology to modify the current capacity of human intelligence should be prohibited. This rule derives from the notion that current human intelligence is satisfactory for all human needs; once modifications become applicable, they may cause a runaway reaction within the free market, in which every company can offer a higher intelligence enhancement for a higher cost – a situation that may lead to the development of a new, superior social race that forcefully discriminates against all others.

Jonas – Technology and Responsibility, A short analysis and criticism

Jonas opens the paper with a citation from Sophocles’s Antigone. The chorus sings an ‘awestruck homage’ to human achievements and dominance. Man is able to control reckless wild animals and to take his own fate into his hands, except for his own mortality. He has built cities in which he sets up his own rules to obey. Man is definitely superior to all other species on earth, though the chorus outlines the foremost notion that bounds him – he is not capable of subjugating the elements of nature. Though he is capable of wearing away the earth with his plow, man, as Homo faber, is always bounded by the immutable cyclicality of nature. His tools guide his way in building his own urban empire, but his empire could suddenly be doomed and vanish just as suddenly as it erupted.

Jonas argues, with a high degree of justice, that this citation presents the equilibrium of powers between man and nature that has been acknowledged as a divine truth throughout history. The tacit premise of the unbeatable nature of nature has played the role of determining the definite framework of ethics. But recently, fractures in this image have begun to occur at an increasing frequency.

The recent emergence of vast environmental crises, the new technological capability of connecting masses of people all over the planet to countless discussions at any scale – these and more have put into question the core basis of current ethics’ tacit premises: an ethics that was always bound to the domain of instantaneous and nearby events carried out by its agents, and that conceived of nature as immune. The author argues that, taking these contemporary circumstances into account, the underlying presumptions of ethics should be enhanced – a move which clearly causes the current ethical construction to collapse in favor of a new one.

Jonas looks for an ethical framework grounded on reason rather than religion, and chooses to propose an enhanced formula for Kant’s famous categorical imperative, which states –

“In your present choices include the future wholeness of Man among the objects of your will.”1

This notion, rather than being based on a hypothetical experiment of universalizing one’s action, is based on an “objective responsibility” for man’s persistence on earth. A simple implication of this imperative would be taking environmental considerations into account in one’s present actions. Although the author states that this enhanced imperative derives from a non-anthropocentric ethical perspective, it still fails to implement a holistic point of view in which man is not the crown of creation. Does the responsibility the author proposes necessarily apply to the preservation of animals as subjects, rather than as man’s objects? Is the notion of man’s continuation essential at all if, despite its noble ethics, mankind eventually fails to maintain its own continuation? Why could we not simply assert that, if a massive human extinction ever takes place, it only implies man’s clear inferiority before evolution’s natural selection?

Jonas does not stop there but presses forward with three additional plausible scenarios in order to explicate his final insight. The first concerns the plausible scientific achievement of eliminating mortality; the second derives from contemporary experiments that achieve partial behavioral control in the biochemical field; and the third concerns humans’ genetic self-modification into a greater form of themselves (he does not explicate this implication in detail, so I can only guess he means any plausible enhancement of man’s capacities, e.g. superior artificial intelligence, or the ability to engineer one’s own genome to be immune and have enhanced senses). Finally, Jonas introduces his insight: just as ‘Thou shalt not kill’ could only be phrased against the background of killing’s common occurrence and concrete possibility, so a new ethics should be written now, as we reach a point at which either these technological achievements will be restricted by the new ethics, or an irreversible shift will take place and ethics will lag behind, eventually shaped by those achievements.

Within the boundaries of our limited scope, I can only point out a single argument Jonas raises that is far from being taken for granted:

Each time we thus bypass the human way of dealing with human problems, short-circuiting it by an impersonal mechanism, we have taken away something from the dignity of personal selfhood and advanced a further step on the road from responsible subjects to programmed behavior systems.2

Though it may be thoroughly controversial where exactly this technological revolution is heading, I propose to object to this monotonic argument. It is not that a straight line can be drawn from a human problem, through an impersonal-mechanism solution, to an end that is necessarily a programmed behavior system. Many contemporary technological solutions that we already use on a daily basis may well cause a permanent replacement of previous tools (e.g. using Waze as a navigation tool, using Google as a resource-addressing tool, and more), but they do not necessarily lead to an elimination of our selfhood. In fact, these tools can be used the other way around as well.

It is clear that, as long as I choose my own goal and use these tools merely to achieve what I wish for, they will help me reach my desired goal more quickly and efficiently than ever before.

Plato, for instance, if he wished to address the full context of a specific quote of Heraclitus, had to spend hours upon hours searching over a large range of books – not to mention the very plausible possibility that the desired book could not be found at all. We, on the other hand, may today address the same piece within minutes just by typing a few words on a keyboard. If only a few years ago we could easily waste too much time on a ride from point A to point B by taking a wrong turn, today we might make the trip in the minimal time it can take, thanks to smart navigation applications. Hence, as long as the principle of free will is kept, and any external attempt to circumvent the individual along the way is avoided, these tools can be highly effective for the development of one’s selfhood.

1 Technology and Responsibility: Reflections on the New Tasks of Ethics, Hans Jonas, 44.

2 Ibid., 49.

Responsibility for crashes of autonomous vehicles, Hevelke and Nida-Rümelin

The authors address the ethical aspect of liability assignment in the case of the coming emergence of autonomous vehicles. They raise the problematic implication of the intuitive tendency to assign liability solely to the manufacturer. They justifiably argue that this may lead to a fatal decrease in the manufacturer’s incentive to steadily enhance its product, because the company may find the effort unrewarding when facing massive claims and expenses. On the other hand, full non-liability may cause the same consequence, because manufacturers would again lose any incentive to enhance their products. Hence they reach the conclusion that a partial liability should be imposed on manufacturers, and they step forward from this point to address additional possibly responsible subjects.

They raise a question that casts doubt on the ethical intuition whereby even a small reduction in the number of accidents occurring every year would suffice to justify the launch of autonomous vehicles. They assert that, from a liberal democratic point of view, there is no ground for preferring an arbitrary group of innocent victims, who may be harmed by the operation of autonomous vehicles, over the higher number of people involved in accidents under current manual vehicle operation. This is because liberalism values the free choice and responsibility of the individual more highly than a collective, consequentialist point of view that merely weighs an empirical casualty toll at the expense of the individual’s right to bear the results of his own actions.

Although the authors do not, in the end, fully accept this argument, it is essential to point out that although it sounds sensible, it is in fact senseless. The authors neglect the basic fact that nowadays people use other mass transportation vehicles such as boats and planes, or, most commonly, take a bus. When we take a bus, we deliberately put our lives in concrete, plausible danger and place tremendous responsibility for our own lives in someone else’s hands. Does it really matter whether the third-party operator of a vehicle over which we have no control is a human or a machine, given that a machine could be much safer than trusting a human? Does it bear any ramification for the notion of a liberal democracy? I assume these rhetorical questions are sufficient to show a counter-perspective that is compatible with our intuition and yet raises no objection against our liberal democracies. In any case, the strict statement that a non-consequentialist (e.g. liberal) point of view may never trade off between two options involving an aggregation of human lives, as a mere rational dilemma, seems ludicrous. Refusing to trade off one casualty of an arbitrary innocent man against ten culpable people is one thing; but does it still make sense to refuse to trade off one against ten thousand semi-culpable people whose lives could be saved annually by launching a new technology? If our intuition clearly objects to that refusal, it merely implies that there is an extent at which the non-consequentialist argument begins to lose its ground.

The authors then examine the option of placing liability on the users, in two different forms. The first is the driver’s obligation to intervene when an accident is about to occur. They point out that this may apply only during an intermediate phase in which autonomous vehicles are not yet robust enough to handle extreme cases on their own; once they are capable of handling complex situations, the driver’s intervention may cause more damage than benefit, and thus should eventually be prohibited. The other form of liability is a general responsibility derived from the conception that the user always holds some degree of responsibility for the products he uses. The user should acknowledge that using autonomous vehicles carries plausible ramifications, which he accepts in advance even though he cannot control the outcome. I should point out that this notion of liability is saliently different from the liability a driver currently bears, because the former is only an anonymous responsibility for the action, which may be covered by an insurance company, rather than a personal, reprehensible guilt for an action due to one’s own negligence.

In my view, a decent integration of the two major types of liability assignment (i.e. the manufacturer’s and the driver’s) makes sense, and is likely to guarantee the desired outcome – a gradual enhancement of autonomous vehicles with a constant decrease in casualties.

Finally, I wish to propose a few suggestions of my own for how to properly manage this revolutionary emergence –

* Once autonomous vehicles are launched, the maximum allowed speed should be cut by at least a third. This will guarantee a tremendous decline in the number of accidents, in addition to the decline that will follow the launch itself, and will give the new system some time to permeate.

* Once autonomous vehicles prove to perform better than humans in terms of safety, new legislation should prohibit the use of manually operated vehicles, because humans and machines on the same road would likely cause too much trouble.

* Once the new autonomous system is fully functional, the maximum allowed speed should be gradually raised, restored to its original level, and eventually even surpass it by far. The government should then raise the cost of holding a private vehicle so that it will not be worthwhile for the masses; a highly effective autonomous vehicle network service should take its place instead. This may tremendously reduce the number of vehicles on the roads and roadsides – a vast environmental improvement as well.

A short analysis and criticism of Procreative Beneficence: Why we should select the best children, by Julian Savulescu

Savulescu addresses a contemporary ethical debate concerning current achievements in the field of reproductive genetic medicine. The author specifies two types of medical implementation that ground the ethical debate. The first is the ability to produce extrauterine fertilization, commonly known as in vitro fertilization (IVF); the second is the ability to conduct genetic diagnosis at an early stage, prior to implantation, known as preimplantation genetic diagnosis (PGD). The combination of these two capabilities allows modern medicine to detect disease genes in a specific embryo and to choose to implant an embryo with non-disease genes over one carrying disease genes. As long as the process relates only to the preference of non-disease genes over disease genes, there is firm reason to ethically accept this procedure.

However, the author makes an argument that exceeds this specific scope and asserts that “we have a moral obligation to test for contribution to non-disease states such as intelligence and to use this information in reproductive decision-making”.1

Savulescu calls this moral obligation Procreative Beneficence and goes on to describe the justifications in favor of this principle. He refers to a classic dilemma from decision theory: consider a wheel-of-fortune case in which you may choose between two wheels, say A and B, about which you have no indication except the fact that if you choose B, you take an additional 50% chance of losing some amount of money. He argues, justifiably, that the rational decision is to choose wheel A, which offers the very same prizes as wheel B but without that added risk of loss. By this analogy the author points out the rational necessity, and even obligation, to select against an embryo with a higher probability of disease genes. The same argument also holds for selecting against an embryo with a lower expected IQ, in favor of its future intellectual capabilities, which may yield a higher degree of well-being in life.
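The wheel analogy boils down to an expected-value comparison. Below is a minimal sketch with made-up payoffs and probabilities; none of these numbers are Savulescu’s:

```python
# Expected-value comparison of two hypothetical wheels.
# Wheel B mirrors wheel A's prizes but adds a 50% chance of losing money.

def expected_value(outcomes):
    """outcomes: list of (probability, payoff) pairs summing to probability 1."""
    return sum(p * v for p, v in outcomes)

wheel_a = [(0.5, 200), (0.5, 1000)]                  # no loss branch
wheel_b = [(0.25, 200), (0.25, 1000), (0.5, -100)]   # extra risk of loss

# Rationality favors wheel A: the same prizes without B's downside.
print(expected_value(wheel_a), expected_value(wheel_b))
```

Here wheel A’s expectation (600) dominates wheel B’s (250); by analogy, selecting the embryo with the lower probability of disease genes is the rational choice under the same decision-theoretic reasoning.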

One of the author’s most interesting objections addresses the common argument raised in favor of equality: any preference between two non-disease embryos on account of their qualities may encourage a consequent discriminatory attitude in society. The author inexorably states that we must distinguish between a specific qualification or disease and a person with those same attributes, and maintain an equal evaluation of all people. People who have asthma cannot possibly be harmed by the fact that fewer children are born with asthma. On the contrary, this fact could only help free essential resources to improve their lives. Social equality does not imply that low qualifications or disabilities should be purposely imposed on newborns, once the technology to avoid them is available, all in the name of equality – that is clearly ludicrous.

Nevertheless, although the author makes a few good arguments supporting the notion of Procreative Beneficence, in my opinion he fails to foresee a plausible phenomenon that may have a counter, negative effect on society. Consider a scenario in which procreative beneficence has been adopted and it is completely legitimate to choose from a scale of possible desired heights for a newborn. The initial step will probably be to neglect those genes in the range of what is socially considered short for males. Hence most parents will prefer ranges above 1.70 m, which are socially considered average and above. If this procedure permeates into society and becomes common practice, its constant application will result in a quick abolition of the ranges beneath 1.80 m within a generation or two. This subsequently redefines what it is to be ‘short’ for men, which will then fall between 1.70 m and 1.80 m.

Due to the resulting shortage of diversity in height ranges in the population, and the wish to stand out, parents may push the range up its common scale and reach values higher than 2 meters. The process will repeat itself and result in a competitive race. Nature is already saturated with examples of this phenomenon, carried out by natural selection. Peacock males have developed an exaggerated tail due to females’ constant preference for fancier tails over generations; the accumulated outcome causes other functional disabilities – males are now barely capable of flying. Nature has shown us the deficiencies of natural selection; moreover, what the author suggests we embrace is actually an acceleration of such selection, with not only a single attribute such as height but intelligence, beauty, and many other physical and mental aspects gradually joining this race, whose consequences could easily run out of our hands.
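The runaway dynamic described above can be sketched as a toy truncation-selection simulation. All the numbers below (starting mean, spread, selection rule) are assumptions of mine for illustration, not data from the paper:

```python
import random
import statistics

# Toy model of the "height race": each generation, only heights above the
# current population median count as acceptable parent traits, so the whole
# distribution drifts upward and "short" is redefined every generation.

def simulate_height_drift(generations=5, pop_size=10_000, sd=7.0, seed=0):
    rng = random.Random(seed)
    heights = [rng.gauss(175.0, sd) for _ in range(pop_size)]  # cm, illustrative
    medians = [statistics.median(heights)]
    for _ in range(generations):
        cutoff = statistics.median(heights)          # today's notion of "short"
        selected = [h for h in heights if h >= cutoff]
        parent_mean = statistics.fmean(selected)
        # Children scatter around the selected parents' mean with fresh noise.
        heights = [rng.gauss(parent_mean, sd) for _ in range(pop_size)]
        medians.append(statistics.median(heights))
    return medians

medians = simulate_height_drift()
print([round(m, 1) for m in medians])  # the median climbs every generation
```

Under these assumptions the median rises by several centimeters per generation, so the threshold for ‘short’ ratchets upward exactly as the paragraph argues.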

1 Procreative Beneficence: Why we should select the best children, Julian Savulescu, 415.