Abstract
The paper continues my earlier Chat with OpenAI's ChatGPT in the form of a Focused LLM Experiment (FLEX). The idea is to conduct Large Language Model (LLM) based explorations of selected areas or concepts. The approach is based on crafting initial guiding prompts and then following up with user prompts informed by the LLMs' responses. The goals include improving understanding of LLM capabilities and limitations, culminating in optimized prompts. The specific research subjects explored include a) diagonalization techniques as practiced by Cantor, Turing, and Gödel, and later advances such as the forcing techniques introduced by Paul Cohen and subsequent investigators; b) Knowledge Hierarchies and Mapping Exercises; and c) discussions of I. J. Good's Speculations Concerning the First Ultraintelligent Machine, AGI, and superintelligence. Results suggest variability among major models such as ChatGPT-4, Llama-3, Cohere, Sonnet, and Opus. Results also point to a strong dependence on users' preexisting knowledge and skill bases. The paper should be viewed as 'raw data' rather than a polished authoritative reference.