The informal theinformationsuperhighway book club continues with Ted Chiang's "Dacey's Patent Automatic Nanny"
We are now on the fifth of the nine short stories in Ted Chiang's collection Exhalation. This one is very short, at just a few pages, but despite its length it touches on some of the most fundamental issues we face as a society today: technology, children, love, and the importance of connection as all of these elements melt together. It has not been my favorite story so far, but it is certainly interesting, particularly coming right after the previous short story, "The Lifecycle of Software Objects" (if you missed it, you can read more analysis here).
A few more brief notes:
- Would you like to participate in the conversation? Please email me your thoughts at danny+bookclub@techcrunch.com or take part in some discussions on Reddit or Twitter.
- Follow these informal book club articles here: https://techcrunch.com/book-review/. That page also has an embedded RSS feed for articles in the "Book Review" category only, which is very low volume.
- Feel free to add your thoughts in the theinformationsuperhighway comments section below this post.
Read Dacey's Patent Automatic Nanny
Chiang has constructed a creative framing device here: we are observing a strange machine – Dacey's Patent Automatic Nanny – in historical retrospect, as part of an exhibition entitled "Little Defective Adults – Attitudes Toward Children, 1700 to 1950". The entire story is essentially the museum placard next to the mechanical artifact, describing its history and how it was designed to raise a child without the need for a human nanny.
As with the last short story we read in the collection, the question of technology-mediated human connection sits at the center of the story. Can we raise a child with only a piece of technology? Chiang ultimately seems to come down against the idea, showing that the child's psychosocial development is stunted by its almost exclusive interaction with a non-human being. From the very start, the author even plays with a bit of a double meaning: the exhibition title "Little Defective Adults" could apply to robots just as well as to Victorian-era children.
But much like the digients in the last story, we later learn that the child at the center of the story actually has fine interaction skills – just with robots rather than humans. When the automatic nanny is taken away from Lionel's son Edmund after two years of rearing, the child's development stalls. It picks back up as soon as he has access to robots and other machines. From the story:
Within a few weeks, it became apparent that Edmund was not cognitively delayed in the manner previously believed; the staff had simply lacked the appropriate means of communicating with him.
So we are left with the central questions from the last story: should human-robot interactions be considered equivalent to human-human interactions? If a child prefers to interact with an electronic device rather than a human, is that just a sign of how we privilege and value certain kinds of interaction over others?
That question is explored in far greater depth in "The Lifecycle of Software Objects," but it remains just as interesting a question in our increasingly digital world. We're launching a multi-part series on virtual worlds tomorrow (stay tuned), but ultimately all of these questions boil down to one basic one: what is real?
Outside of that theme (which verges on philosophy and isn't deeply meditated upon in the story's few pages), there are, in my view, two other threads worth pulling on. The first has to do with the variability of human experience. This whole experiment begins when Lionel's own father, Reginald, decides to replace a human nanny with a machine in order to offer his child a more consistent environment ("It won't expose your child to any offensive influences"). Indeed, he wants to replicate that consistency not just for his own child, but for all children, through the automatic nanny.
While Reginald believes it is human nannies that are broken, it is really the automatic nannies themselves that are impoverished. They lack the spontaneity and complexity of people, which prevents the children in their care from learning to handle a wider variety of situations and instead pushes them inward. Indeed, women (also known as mothers) understand this dynamic intuitively: "The inventor (Reginald) phrased his proposal as an invitation to participate in a grand scientific undertaking, and was puzzled that none of the women he courted found it an appealing prospect."
Yet human connection is exactly what drives the pursuit of these robots. The nanny's original inventor, Reginald, uses it on his own son Lionel, who then wants to prove its usefulness to the world by using it on his own son Edmund. So we see a multi-generational pursuit of this dream, but it is driven by the human passion to defend the work of one's parents and the legacy they leave behind. Human-to-human connection thus becomes the main driver behind proving that human-to-robot connection is just as effective, undercutting the very claim being made. It's a nice piece of irony.
The other thread to unravel a bit is the scientific method and how far it can lead us astray. Reginald's development and marketing of the device is undermined by the fact that he never actually conducts real experiments on his own child to assess the quality of different nannies. He simply makes assumptions based on his Victorian values and pursues them relentlessly, before retreating to pure mathematics, a field where he can feel secure in his models of the universe.
At the heart of this little parable is a familiar lesson: sometimes the things that are least measurable have the greatest impact on our lives. This story – like the exhibition in which it is framed – is a warning against hubris and against the failure to listen and to love.
The Truth of Fact, the Truth of Feeling
Some questions to think about as you read the next short story, "The Truth of Fact, the Truth of Feeling":
- What is truth? What is honesty?
- How do the two frames – a historical one about the Tiv and the "contemporary" one about the Remem technology – work together to question what truth means?
- How important is it to get the details of a memory right? Does a convincing narrative override the need for accuracy?
- Do different cultures have different approaches to storytelling and universal truth?
- Does constantly taking photos and videos change our perception of the world? Are they fair representations of the truth?
- How important is it to forget? Memories are meant to fade over time – is that fundamentally beneficial or harmful to humanity?
- Will we increasingly fact-check one another in the future? What would be the consequences of such a future?