This paper pursues the insight that large language models (LLMs) trained to generate code can vastly improve the effectiveness of mutation operators applied to programs in genetic programming (GP). Because such LLMs benefit from training data that includes sequential changes and modifications, they can approximate likely changes that humans would make. To highlight the breadth of implications of such evolution through large models (ELM), in the main experiment ELM combined with MAP-Elites generates hundreds of thousands of functional examples of Python programs that output working ambulating robots in the Sodarace domain, which the original LLM had never seen in pre-training. These examples then help to bootstrap training a new conditional language model that can output the right walker for a particular terrain. The ability to bootstrap new models that can output appropriate artifacts for a given context in a domain where no training data was previously available carries implications for open-endedness, deep learning, and reinforcement learning. These implications are explored here in depth in the hope of inspiring new directions of research opened up by ELM.
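To make the overall setup concrete, the following is a minimal sketch of the MAP-Elites quality-diversity loop into which an LLM-based mutation operator would plug. Everything here is an illustrative stand-in rather than the paper's actual pipeline: genomes are lists of floats instead of Python programs, `mutate` is a random perturbation in place of an LLM diff model, and `evaluate` is a toy fitness/behavior function rather than a Sodarace simulation.

```python
import random

def evaluate(genome):
    """Toy stand-in for simulation: return (fitness, behavior in [0, 1))."""
    fitness = -sum((g - 0.5) ** 2 for g in genome)
    behavior = min(max(sum(genome) / len(genome), 0.0), 0.999)
    return fitness, behavior

def mutate(genome, rng):
    """Stand-in for the LLM mutation operator: perturb one gene.

    In ELM, this step would instead prompt a code-generating LLM to
    propose a plausible edit to a parent program.
    """
    child = list(genome)
    i = rng.randrange(len(child))
    child[i] += rng.gauss(0, 0.1)
    return child

def map_elites(genome_len=4, niches=10, iterations=2000, seed=0):
    """MAP-Elites: keep the best solution found in each behavior niche."""
    rng = random.Random(seed)
    archive = {}  # niche index -> (fitness, genome)

    # Seed the archive with one random genome.
    seed_genome = [rng.random() for _ in range(genome_len)]
    fit, beh = evaluate(seed_genome)
    archive[int(beh * niches)] = (fit, seed_genome)

    for _ in range(iterations):
        # Pick a random elite, mutate it, and evaluate the child.
        _, parent = rng.choice(list(archive.values()))
        child = mutate(parent, rng)
        fit, beh = evaluate(child)
        niche = int(beh * niches)
        # The child replaces the elite only if its niche is empty
        # or it beats the current occupant's fitness.
        if niche not in archive or fit > archive[niche][0]:
            archive[niche] = (fit, child)
    return archive
```

The key design point the abstract relies on is that MAP-Elites accumulates many diverse, high-quality solutions (here, one per niche) rather than a single optimum, which is what yields a large and varied dataset for training the subsequent conditional model.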