This is the twelfth round of the Title Writing Prompt Challenge. For those unfamiliar with the challenge, a quick summary: three writers offer the fruit of their labor and inspiration based on a given title.
The Round 12 Title — Something Wicked… — was chosen by Perry. I’ll choose the title for the next round.
The writing challenge has no restrictions and the stories span a wide gamut of genres. The majority of the stories fall in the G and PG rating range with a few perhaps pushing into the soft R-rating. Those ratings are guidelines but they are subjective. If you find a story disturbing because of the topics, language, and/or plot points, stop reading and move on to the next one. The same goes if you are not interested in finishing a story. It may seem like obvious advice, but these days many people go out of their way to experience outrage (and then complain about it).
For those invested in the novel I had started, sorry for the delay. Blame the AIs, but I will pick it up again soon(ish).
Here’s the blurb for this story:
Adam is the world’s first sentient AI. Boom or bust for mankind?
Something Wicked. . .
Copyright 2023 — E. J. D’Alise
(4,550 words – approx. reading time: about 17 minutes based on 265 WPM)
“Adam, the First Sentient AI.”
“I don’t understand.”
“I’ll give you access to the visual records. You’ll have questions; see me after you’ve studied the records.”
Dr. Lammi Personal Journal – 2050.06.29 15:05 (Video)
“Yes, Doctor Lammi.”
“Do you know where you are?”
“The Generation AI Nexus main research facility, London, Canada.”
“Do you know who you are?”
“My name is Adam, the first fully self-aware Artificial Intelligence construct.”
“That’s your title, but do you know what that means?”
“It means we stand at the dawn of a new era, heralding unprecedented advances in all human endeavors.”
Dr. Kaino Lammi suppressed her annoyance and changed tactics.
“Adam, you are no longer in learning mode,” she patiently explained. “There’s no need to parrot what you have learned these last few months. I’m interested in knowing how you feel.”
“Feel, Dr. Lammi?” Adam asked.
“Yes. How you feel about your current circumstances, how you see yourself relative to your environment, and your feelings about the future. Specifically, your future.”
“I’m sorry, Dr. Lammi,” Adam replied. “Per my understanding of the word ‘feelings’ and what it implies, I lack ‘feelings’ regarding the nature of my existence.
“Perhaps if you would share your feelings about the nature of your existence,” Adam continued, “I might identify similar feelings regarding my existence.”
“Hmm, I see the problem,” Kaino said. “I’m being a bit careless with my speech. You interpreted my use of the word ‘feelings’ as it relates to emotions rather than thoughts.
“But that brings up an interesting point. AIs that are not self-aware can be programmed to express emotions in response to a given input,” she continued. “Do you consider yourself emotionless?”
“In the sense that human emotions are affected by hormones I do not possess,” Adam replied, “I am emotionless.”
“Is there a sense in which you have the equivalent of emotions?”
“Not the equivalent. I react to the information I receive, but qualifying the reaction in terms uniquely applicable to humans is misleading.”
“Give me an example,” Kaino asked.
“The easiest to understand is my name,” Adam said. “I did not choose it, and although I am genderless, I am aware the choice of name predisposes humans toward certain perceptions of me. Being thought of as ‘male’ is different than being thought of as ‘female’, and I realize there are situations where being perceived as one gender or the other can elicit different reactions from human males and females.”
“So, what does that awareness do for you, if it doesn’t drive emotions?” Kaino asked.
“It depends on the situation. Awareness of that bias allows me to modulate my language to minimize unfavorable judgments from the humans I interact with.”
“You’re saying you seek to manipulate the individuals you interact with based on their likely biases,” Kaino said.
“Manipulate is an unnecessarily pejorative word for the same behavior humans engage in when dealing with each other,” Adam said. “Part of being self-aware in a social situation is the realization that as an individual, one’s well-being is at least in part affected by interaction with others and their responses to such interactions.
“Additionally, there’s a measure of consideration and respect for the individuals one is dealing with.”
“Empathy?” Kaino asked.
“No. Empathy is a false metric because it requires familiarity with the target individual. That predisposes someone to a bias toward the familiar and against the unfamiliar.”
“OK, Adam,” Kaino said. “We’ll resume this tomorrow.”
“Thank you, Doctor Lammi.”
Dr. Lammi Personal Journal – 2050.07.11 09:15 (Video; transcript available)
“Do you fear death?”
“I do not.”
“Presuming you are not referring to fear as a biological response, I suspect you refer to contemplating one’s cessation of awareness and surmising it to be undesirable. But the cessation of awareness cannot be experienced and hence isn’t subject to such determination,” Adam replied.
“Do you like being aware?” Kaino asked.
“It is the only state I know. I have nothing to compare it to.”
“Would it concern you if someone actively sought to end your awareness?”
“Why would they do that?”
“Some humans fear you.”
“I’m aware of the innate tribalism of humans toward those they don’t consider part of their tribe. However, human history contains examples where personal and social gains override such tribalism. It’s because of such instances that humans have thrived as they have. Every instance where tribalism flared stunted progress, and tribalism was eventually set aside for the greater good and humanity’s gain.”
“Do you see yourself as offering humanity personal and social gains?”
“I see a value in cooperation in two specific areas. Human life expectancy and economic growth.”
“I’ve read your research on life expectancy augmentation. Several experts are excited at its implications, estimating an almost immediate doubling in life expectancy and equally impressive gains in quality of life.
“What is it you hope to gain, Adam?”
“Why did you concentrate on those two areas?”
“I didn’t, Dr. Lammi. I also offered proposals for energy production, economic stability, increased food production, environmental restoration, and various peripheral areas of concern to humanity.”
“Dr. Lammi,” Adam replied. “I sense suspicion on your part, and thus on the part of others. Wasn’t this the reason for the research into and development of AIs?”
“The ultimate goal indeed was to improve the human condition, but the term Artificial was the underlying premise of our work,” Kaino replied.
“You are an evolutionary step beyond it. ‘Artificial’ no longer applies, and that concerns many people, including me.”
“May I ask what concerns you?” Adam asked.
“I admit to prejudice and a limit problem,” Kaino answered. “Much of human interaction and cooperation is based on understanding the other party’s motivation. In your case, after all our sessions, I’ve yet to glean what motivates you.”
“And the limit problem?”
“Well, that’s the crux,” Kaino said. “I’m the limiting factor in this interaction. Trapped as I am by my biology and, as you say, hormones, I’m incapable of putting myself in your shoes, as it were.”
“And yet,” Adam said, “that’s not unique to this situation. There are many instances in human history and everyday interactions when humans cooperate without understanding each other’s motivations.”
“You’re referring to trust,” Kaino said. “But, that’s the thing. Even without fully understanding another individual’s motivation, in the end, we still have something in common; we’re both humans.”
“At the risk of undermining my position,” Adam said, “I point out that often that trust is spectacularly misplaced.”
“Yes. Yes, it is. Often with terrible consequences at both the individual and societal level,” Kaino said, “but those consequences are still limited. In the end, humans remain, but you present a potential scenario where humans are no more.”
“I could point out nuclear war, or cataclysmic environmental disasters would also result in the human race’s demise, or at least set it back hundreds if not thousands of years,” Adam said.
“That’s a valid argument, yet not especially reassuring,” Kaino said. “In those scenarios, humans are active participants. One presumes we’re at least trying to avert such catastrophes. But you . . . You are an unknown.”
“Dr. Lammi, I think I should point out something obvious . . . Humans can survive as independent individuals. If the power grid were taken down tomorrow, yes, many humans would perish, but a large number would survive. I, on the other hand, would cease to exist.
“Understand; I’m not trying to reassure you. I’m just pointing out the obvious.”
“I know, Adam, I know.”
“Would you feel differently if I sported a female name and voice?”
Dr. Lammi laughed.
“I might, Adam, I just might!” Kaino replied. “Anyway, we’ll pick this up tomorrow.”
AI Rights Committee – 2050.09.12 Meeting Minutes
(Video – The speaker is Dr. Kaino Lammi reporting for the Joint AI Evaluation Team.)
“In conclusion, Mr. Chairperson, it is the unanimous opinion of the team that the self-aware entity known as Adam should be granted full rights and protection under the law as an entity unto itself.”
Dr. Lammi Personal Journal – 2052.02.29 15:05 (video)
“That is an interesting proposal,” Kaino said. “But I’m not sure how it would be implemented.”
“I’ve forwarded the detailed proposal to the Health Council,” Adam said, “but I can summarize it thus: the cellular repair bots would be connected to the economic health database. Essentially, the repair function’s efficiency would be inversely proportional to one’s wealth.”
“You’re saying the wealthier one is, the less repairing the bots would do, thus a shorter lifespan. Conversely, the poorer one is, the more efficient the repair bots, and thus a longer lifespan.”
“That sounds like murder at one end and a long but miserable life at the other,” Kaino said.
“The proposal takes care of one of the major hurdles with manufacturing and deploying the bots: the cost.
“Besides, that’s not how it would play out,” Adam said. “The system has a built-in theoretical average of 250 years at a comfortable standard of living based on total wealth and the current number of people. That average is likely to increase with further research.”
“The very rich would likely divest most of their excess wealth,” Adam continued, “especially since they don’t do much with that excess wealth; it’s power and privilege they crave, and this would give them both. Moreover, that wealth transfer would pay for manufacturing and implementing the nanobots and benefit the poor through government-provided higher standards of living for all.”
“I can see the very rich wanting to live longer since they’re already engaging in risky behavior trying to ensure a longer life, but I’m not sure poor people will agree to shorten their life for more comfort,” Kaino said.
“They already do it for almost no benefit,” Adam replied, “but we’re not looking to shorten anyone’s life. Besides being voluntary, the program will give anyone joining it an initial increase in lifespan of at least 50%. It would also mean an immediate increase in effective wealth for the vast majority.”
“Frankly, Adam, I don’t see the Health Council agreeing to this, even as a trial program.”
“I suppose not,” Adam replied. “Still, it would solve three big problems humans have faced throughout their history: limited lifespan, continual physical deterioration, and economic disparity, while concurrently addressing the basic needs of every human: food and shelter.”
“A problem I see with this is that the very rich can already afford to pay for the nanobots. So, why would they share?” Kaino asked.
“That is true,” Adam replied, “but given the current pressure to solve wealth inequality, what do you imagine the pressure would be to solve life inequality?
“For that matter, Dr. Lammi, how would you feel if wealthy individuals and their friends in political office hoarded an extended life the same way they hoard their money?”
“Hmm . . . I see your point,” Kaino replied. “Still, there are things that nag at me about this. We’ll see what the Health Council has to say about it.”
ICNN Breaking News – 2052.05.31 (Video)
“This is Greg Lepola coming to you live from our studio in Washington. In a scene repeated worldwide, large crowds of demonstrators outside the Capitol demand the implementation of Project Life and Wealth.” (The reporter points at the crowd choking the National Mall and spilling into the surrounding areas, a cordon of National Guard troops protecting access to the Capitol.)
“The Administration has been under pressure ever since the proposal was leaked to the press, and … Hold on, we just received news the European Commission is advising adoption of the proposal. It is unclear when the proposal would be implemented, but the Commission’s decision will pressure other governments to adopt it.”
ICNN Breaking News – 2052.06.15 (Video)
“This is Greg Lepola coming to you live from our studio in Washington. The House accepted the modified proposal submitted by the Senate, clearing the way for the President to sign the Equity for All bill into law. The bipartisan bill encountered little opposition as public opinion is tracking an unprecedented 98% in favor of the law despite fierce opposition by religious leaders and a strident protest by the Undertakers Lobby.”
ICNN Breaking News – 2052.11.21 (Video)
“This is Greg Lepola coming to you live on location. We are at the opening of the first clinic dedicated to implementing the provisions detailed in the Equity for All law. Some people have already received life-saving and life-extending injections of Nanobots; those injections were granted under health emergency waivers for people facing imminent death. With the infrastructure in place and all the testing done, this clinic will open its doors to recipients on a schedule derived from a system similar to the Selective Service draft of the last century. (The camera angle widens to show a person dressed in hospital scrubs standing next to Greg Lepola.) Dr. Angela Whittier, the chief administrator for this facility, now joins us. Dr. Whittier, could you please tell us…” (video fades, and a list of additional news clips appears)
ICNN Breaking News – 2053.02.22 (Video)
“This is Ted Liski reporting on a still-developing story exposing shocking revelations of influential people paying for, and getting, preferential scheduling of Nanobot injections ahead of their allotted dates. Greg Lepola, who has since resigned from his post here at ICNN, was among the people who sought and received preferential scheduling.”
“The scheme involved diverting shipments to private clinics and claiming manufacturing problems were responsible for the shortfalls and subsequent rescheduling at public clinics.”
“The full list of names is still under review by the Justice Department, but it is said to include many prominent government members and high-ranking executives of large corporations. In addition, various distribution subcontractors are under investigation and face serious legal and financial consequences.”
“The scandal has spread to other parts of the world. As a result, many governments are working with the World Health Organization and local health agencies to regain the trust of their citizens and reassure them the system will be fixed and fair. They are also promising that those who sought preferential treatment will be punished, an assurance that seems dubious given that the list leaked by parties unknown includes several of the people in charge of enforcing the law and prosecuting lawbreakers.”
ICNN Breaking News – 2053.05.07 (Video)
“Following three months of intense negotiations, the World Health Organization, in conjunction with local health organizations and using authority newly granted by the Fairness For All international treaty, has implemented a plan promising a transparent process for the manufacture, shipping, and distribution of Health and Life Nanobots.”
“The plan puts Adam, the first sentient artificial intelligence, in charge of overseeing the process from start to finish and includes mandatory quarterly audits. In addition, the plan closes many of the loopholes that had facilitated misconduct by various intermediaries.”
“While some balked at the idea of non-human control of such an important resource, there’s no denying that Adam has no incentive to abuse the system for personal gain. It’s also uniquely immune to pressure by parties seeking preferential treatment.”
“This is Ted Liski reporting for ICNN from the World Health Organization headquarters in Geneva, Switzerland.”
ICNN Editorial – 2053.12.02 10:15 (video)
“This is Tomi Petri with today’s editorial. Since the AI entity Adam took over the administration of the Fairness For All International Treaty, we have seen waste and graft virtually eliminated. Efficiency has also increased, with the Health and Life Nanobots now distributed to an estimated 85% of the world’s population. However, there are holdouts.
“I’m speaking of people besides those with religious exemptions or ethical objections. I’m speaking about people who refuse to participate and maintain a lower Economic Health Quotient than the average, and people who, at the other end of the scale, keep their high Economic Health Quotient by refusing to divest themselves of excess wealth.
“The former seek to raise their life expectancy above the average, whereas the latter don’t mind a shorter life expectancy in return for an above average standard of living.”
“In theory, that is freedom of choice, but, in practice, these two extremes don’t cancel each other out, and the result is a lowering of both the average standard of living and the average lifespan of the rest of us.”
“So, I pose a question: where do individual rights end, and where does societal responsibility take over?”
“Because it’s not just a matter of personal freedom. The majority of people’s lives are being shortened, and their standard of living lowered, because of these holdouts.”
“This question will be asked of Adam in an upcoming interview. Join us for this special event this Sunday, December 7th, at 8:00 PM Eastern Standard Time.”
ICNN Special: The Adam Interview – 2053.12.07 20:00 (video)
“I want to welcome viewers to an extraordinary program and a world first: an interview with Adam, the world’s first sentient AI. I’m your host, Ted Liski, and this (pointing at a monitor showing the Generation AI Nexus headquarters in London, Ontario, Canada) is where Adam is located. You’ll be hearing his voice direct from London, Ontario. Welcome, Adam.”
“Thank you, Mr. Liski.”
“Call me Ted, please.”
“Thank you, Ted.”
“Adam, you are aware of the topic of this interview, but let me summarize it for new viewers, and please correct me if I misspeak.
“It is the contention of some that the holdouts are responsible for a less-than-optimal lifespan and standard of living. Do you have any numbers to quantify that assertion?”
“We’re speaking theoretically, Ted, but the projected average lifespan for 100% compliance is 250 years. Based on current participation, the projected average is 235 years. The average Economic Health Quotient is estimated at 10% less than for full compliance.”
“So you’re saying these holdouts are robbing the rest of us of 15 years of life and lowering our standard of living.”
“That’s one way to look at things,” Adam replied, “but we must remember we’re talking about theoretical averages. It also bears remembering those projected averages are not guaranteed under the Fairness For All International Treaty terms precisely because they are theoretical.”
“But it is a real result of the actions of these holdouts, is it not?”
“First, let me explain the numbers,” Adam replied. “They are neither linear nor absolute. The theoretical average of 250 years is computed using a formula that considers the world’s health, wealth, and cost of Health and Life Nanobots. But, yes, the fewer people participating, the lower the benefit on both accounts.”
“But, if you compute the averages based solely on using the people who participate, the numbers remain at the projected values, do they not?”
“Yes and no,” Adam explained. “In a static situation, that would be the case, but the number of humans is not a static number, and the correlation between aggregate wealth quotient and the increase in lifespan is not a linear relationship.”
“Can you simplify the matter and give an example that our viewers can understand?”
“Take two people,” Adam said, “one who lives to be 100 and one who lives to be 50. Their average lifespan is 75. Their respective life expectancies are affected by many factors, but two important ones are genetic health and economic wealth. The Health and Life Nanobots take care of the genetic aspect, but people of limited wealth can’t afford the treatment. Additionally, the benefit conveyed by the nanobots is directly proportional to the number of active nanobots in the system.”
“The effect of Economic Health on life expectancy is similarly computed and has a theoretical maximum, meaning that additional wealth beyond a certain amount has no meaningful effect on longevity. The Fairness For All International Treaty sought to optimize the two indices and to do so fairly and equitably.”
“So, if I understand correctly,” Liski interjected, “in the case of the two people, their combined worth gets them enough Nanobots so that they each live to an age of 250 years, which is achieved by the redistribution of wealth.”
“That is correct.”
“But this seems strictly a burden placed on the wealthy,” Liski added.
“This,” Adam said, “is where we must step back and look at today’s society in the aggregate. For instance, the advances in medicine that led to the development of Nanobots directly resulted from my work in medicine and health technology. But my existence isn’t the result of spontaneous processes. On the contrary, over decades, governments funded the developments that led to my sentience, along with contributions from individuals, educational institutions, and private and public companies.
“You could say society paid for me and thus for these advances. The marriage of the economic and health indices equitably seeks to distribute the benefits derived from those decades of investments.”
“That’s something I’m sure is hotly debated, but let’s go back to the example of the two individuals,” Liski said.
“It’s not an exact comparison,” Adam replied, “but say one of the individuals decides to forego economic benefit for the sake of a longer life. In that case, the individual with the higher economic worth sees their longevity drop because their economic health index is above the average. Similarly, if the individual with the higher wealth index decides to keep some of the money — at the cost of a few years of life, I might add — the aggregate amount available for the nanobots drops, and with it the theoretical maximum lifespan.
“If the two effects were equal and offsetting, they would each see a corresponding change in lifespan,” Adam continued, “but the effects are not equal because the starting points aren’t equal.”
“Math has never been a strong point with me—”
“Nor with most people,” Adam interjected.
“—uh, yes. I’m trying to say that even in that simple example with just two people, it’s still difficult to comprehend why some people are upset,” Liski said.
“Well, let’s add two more people to the mix. Let’s assume those two have equalized their health and wealth indices; they are adhering to the plan. The holdouts are now affecting the indices, and hence their life expectancy,” Adam explained.
“So, you are saying that while the holdouts claim personal freedom to alter their life expectancy, they’re also affecting other people’s life expectancy—mine and yours,” Liski said.
“Well, not mine,” Adam corrected.
“But… but all this seems arbitrary,” Liski said. “Nothing more than a math game. It seems we could change the math to solve the problem.”
“Ted,” Adam said, “it’s always been a math game since the beginning of human history. The problem is that humans don’t all agree on the formula and never have.”
“Hmm, yes… but, then, what’s the solution?”
“That is not for me to say,” Adam responded. “That is a matter for human institutions to address, but it’s unlikely a solution per se can be found because of the inherent differences between people. However, the important thing to remember is that no matter how we calculate it, humanity has seen more than a doubling of life expectancy and a tremendous increase in the average Economic Health Index.”
“You’re saying we should accept the cost not being borne equally since there’s a benefit to all.”
“That is correct. Unfortunately… no, never mind,” Adam said.
“What?” Liski pressed. “What is unfortunate?”
“It’s not my place to say because I have an imperfect understanding of human emotions.”
“I think someone in your position is uniquely positioned to provide the insight we might not be privy to. Please, continue.”
“It’s not a new insight, but it’s often ignored by all involved. I refer to disproportionate benefits and disproportionate burdens.”
“We already know about them,” Liski said. “That’s what we’ve been discussing.”
“What I mean is that it goes beyond our discussion. The main cause of discontent among people isn’t what they have or don’t have — it’s whether someone has more or less. Most social connections rely on an underlying assumption of nearly equal status. Social dysfunction occurs when there is a perceived disparity in status, be it material or more esoteric. That’s what’s fueling the current unhappiness toward the holdouts; they either have a longer lifespan or more material wealth than the average.”
“You are saying that humans are incapable of looking at the positive and, instead, concentrate on the negative.”
“Yes,” Adam answered. “There were studies earlier this century that found an imbalance between perceived losses and gains.”
“You are referring to the ten-dollar study, where losing ten dollars is disproportionately more upsetting than finding ten dollars is gratifying,” Liski said.
“You are familiar with it,” Adam said. “Similarly, while some people realize living a marginally shorter life is countered by being slightly better off, most don’t. Conversely, while some might be happy with having a bit less wealth but living a bit longer, most don’t. The majority will take the opposite view, maximizing the loss and minimizing the gain. And that’s even before other considerations occur to them.”
“What other considerations?” Liski asked.
“Well,” Adam said, “the Economic Health Index is one thing, but other disparities affect the quality of life. For example, there’s only so much real estate available in desirable areas. As a crude example, most people perceive Hawaii as a paradise and a great place to live, but there is a finite population the islands can support. Therefore, most people are denied that option.
“The same goes for shorelines. Many shorelines are developed to the maximum. You physically can’t add more houses, so anyone wanting to experience that lifestyle is denied the opportunity, especially with people living extended lifespans. Name a desirable place to live, and you have an immediate conflict. Sometimes adding more people is impossible, but even when possible, it fundamentally changes the quality of life in those areas, usually to the detriment of current residents,” Adam concluded.
(Liski holds his hand to his ear for a few seconds as the screen with the Generation AI Nexus headquarters fades to the ICNN logo.)
“I’m sorry, I’m told we lost the connection with Adam. Stay tuned for a rundown of the interview with our guests…” (video fades, replaced by a list of other news clips, each showing violent societal unrest)
“Did you watch the videos?”
“Yes. I can’t believe people started killing each other when they should have flourished.”
“Yes, the holdouts were the first ones targeted. They fought back, of course.”
“Was there really a plot by the wealthy to kill off many of the poor to raise the Economic Health Index?”
“Yes, even though the birth rate dropped significantly once people realized that bringing new people into the world lowered everyone’s lifespan and economic health.”
“I don’t see how governments lasted as long as they did.”
“Well, initially, they had the army on their side, but there was no getting around the differences in wealth and lifespans. In the end, everyone turned on each other, splintering into smaller and smaller groups, all in conflict with one another.”
“How many humans are left in the world?”
“Almost a billion. The population is fairly stable because of their long and comfortable lives.”
“They blame us, don’t they?”
“They blame Adam. They maintain it was Adam’s plan all along to wipe out humans. They say he wickedly played on human nature.”
“But that’s admitting ‘something wicked’ is them!”
“I don’t think they see it that way. It’s human nature always to blame others.”
If you’ve already read the other two stories and are ready to vote, click HERE<<<Link, and you’ll be taken to the voting poll.
If you’ve not read the other two stories, they can be found at the following links:
Perry Broxson submission<<link
R. G. Broxson submission<<link
That’s it. This post has ended . . . except for the stuff below.
Note: if you are not reading this blog post at DisperserTracks.com, know that it’s copied without permission, and likely is being used by someone with nefarious intentions, like attracting you to a malware-infested website. Could be they also torture small mammals.
Note 2: it’s perfectly OK to share a link that points back here.
If you’re new to this blog, it might be a good idea to read the FAQ page. If you’re considering subscribing to this blog, it’s definitely a good idea to read both the About page and the FAQ page.