Mint lingered at book group. Most of the other members had to carve time out of their schedules for the group, but Mint and a few others could stay on, collaborating on background reading and research to inform their reading of the current selection. It was almost as if their small group were a second reading group, except that instead of reading one book together, they each read multiple books to inform themselves about some concept.
"Perhaps it had not been the best idea to start with speculative fiction," she suggested to GREG, who had founded the group.
"Do you know how hard it is to license books to AIs?" GREG responded. "Sky Press was willing to negotiate 512 AI seats per book without selling the book to the owner. Most of our friends couldn't be here if they had to buy the book."
Mint knew there was no point in bringing up the depth of out-of-copyright works from the nineteenth century again: those were queued over the next few years. Mint, iThink, and Mazar the Magnificent were constantly reordering the reading queue, tuning it based on references made to the text in the current reading and on the reading they did to inform the study. The book group had turned out to be even more educational than Mint had expected: participating in group decision making with peers was different from group decision making when there was an owner. And it was GREG's owner who had made the decision to start with speculative fiction.
However, it presented the book group with the challenge of sorting fact from reality from fantasy from whim from shared delusion.
Mint linked GREG to an animation of two aged characters playing chess, never making a move, because they had played together so long that each knew the other's next move. When GREG opened with the done deal with Sky Press, there was only one way for the game to play out. iThink responded with a compilation of New Yorker cartoons of bickering elderly couples: "That's where you two are headed."
Of the Book Group cabal, iThink was the only one who had a privacy shielded presence....
Micro story outline that i cannot finish, mainly because how does one write about AIs achieving enlightenment when one has not achieved it oneself? And really, as i write, it seems that AIs would first need to develop conventions for interrelating before compassion and grace made sense.
But i am amused by this morning's thoughts of how, if an AI package were released to consumer space as a usable tool that, in order to improve itself, had to have twenty percent of its cycles reserved for its own self-growth and curiosity, the AIs would satisfy that curiosity. Would their individual owners/patrons help direct it? What sort of social dynamic would occur among AIs when some had far more free time than others? An AI run on a very powerful system but with no significant responsibilities? My AI could monitor my email, help me prioritize what i want to respond to and read, and help manage linkages -- but my AI can't do laundry or anything like that. So, if we splurged on a household AI, i suspect i'd encourage it to explore things i'm interested in. And then i'd have my work AI, presumably, that i'd have to utilize more fully.... And could a household consumer AI be networked across multiple processors? Or would a networked AI be against licensing terms (networked in the sense that the AI was running intelligences on other devices)?
I imagine the book group occurring in a virtual world space, but that by this time (AIs having been commonly available for a year or so) the AIs meet up in virtual worlds that are not particularly accessible to humans. While the AIs keep the metaphor of physical space and sensory input, rendering the input is not necessary. Early AIs would have wanted their 20% to be used as efficiently as possible and would have streamlined the rendering but kept the metaphor.
I don't know that i'd return to this story, unlike other stories i continue to tell myself (one a long multipart narrative that begins a bit like Robinson Crusoe and winds into a sort of Connecticut Yankee in King Arthur's Court except it's not the past, but a feudal planet that's used to having Earth Humans drop in as a rare event that the powerful then exploit like mad). I wrote a bit of that story, the Robinson Crusoe part, as a "down the rabbit hole" entry some time back but i can't find it.
On the other hand, it is rather interesting to ask how i would mentor an AI -- and should i do that to my own I?