This is interesting because we have to load machines with a basic set of assumptions, and because of machines' precision and long-term memory, they will essentially never forget those assumptions.
For example, if you were teaching a robot how to write, you would teach it the difference between your and you're. This is a pretty standard grammar rule that I very often forget or ignore, leaving my writing pockmarked with errors.
Though that example is banal, consider localization, evolution, and culture: most "innovations" or differences arise from error, accident, or deliberate changes to a set of assumptions.
The problem is, do we risk freezing culture at a point in time when we load our assumptions into the robot's machine learning? Or are the scientists clever enough to introduce randomization and evolution into the machine learning? In other words, how do you teach a robot to be both accurate and open to doing things "wrong" — the hallmark of creativity and innovation?
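Machine learning researchers do have a standard way of framing this tension: exploration versus exploitation. As a rough illustration (not anything specific to the robots discussed here), an "epsilon-greedy" strategy mostly follows the rules it has learned, but with some small probability deliberately tries something "wrong." Everything below, including the made-up scores, is a hypothetical sketch:

```python
import random

def choose_action(scores, epsilon=0.1):
    """Epsilon-greedy choice: usually pick the best-known action,
    but with probability epsilon pick one at random instead."""
    if random.random() < epsilon:
        # Deliberate deviation: try something "wrong"
        return random.randrange(len(scores))
    # Follow the learned assumptions: pick the highest-scoring action
    return max(range(len(scores)), key=lambda i: scores[i])

# Simulate 1000 decisions over three options, where the robot
# "believes" option 1 is best (score 0.9).
random.seed(0)
counts = [0, 0, 0]
for _ in range(1000):
    counts[choose_action([0.2, 0.9, 0.4], epsilon=0.1)] += 1
```

With `epsilon=0` the robot never deviates from its assumptions; raising it injects exactly the kind of controlled "error" that the passage suggests innovation depends on.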