What if…

As far as comprehension and accuracy are concerned: use a sophisticated LLM to do something like chain of thought, but while doing so also have it translate both the initial prompt and the CoT from (e.g.) English into a diverse range of other languages.

The aim is to re-run the task with these different-language prompts on an ongoing basis and analyse the agreement or spread of the results, as a guide to likely accuracy.

Because it could well be that a prompt in English gives a quite different result from an otherwise equivalent prompt in French, the Gaelics, Maltese, Chinese, Russian, Hungarian, Japanese, Mongolian and so many more. Is one language “better” than another for prompting? It may be that at a contingent level, i.e. per individual prompt, the LLM “understands” it differently per language – obviously not completely differently, but there may be subtle shifts in the task and reward, enough to make a difference to accuracy or depth. Creating a wide array of alternative candidates at each step of the CoT may improve things via a sort of “voting/sampling” stage that chooses the best result before moving forward (a rough sketch of that loop follows below).

It'd be vaguely like the Sapir–Whorf hypothesis, used to gain a higher requisite variety of tokenisation richness.

#LLM #AI #language #SapirWhorf #CoT
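A minimal sketch of what that “translate, re-ask, vote” loop might look like. This is an assumption-laden illustration, not anything from the original post: `translate` and `ask_llm` are hypothetical stand-ins for whatever machine-translation and LLM-completion calls are actually available, and agreement is measured by simple majority voting over normalised answers.

```python
"""Sketch: ask the same question in several languages and use agreement as a
confidence signal. `translate` and `ask_llm` are hypothetical placeholders."""
from collections import Counter

LANGUAGES = ["en", "fr", "ga", "mt", "zh", "ru", "hu", "ja", "mn"]


def translate(text: str, lang: str) -> str:
    """Hypothetical: translate `text` into `lang` (via an MT system or the LLM itself)."""
    raise NotImplementedError


def ask_llm(prompt: str) -> str:
    """Hypothetical: run the prompt through the LLM with chain of thought, return the final answer."""
    raise NotImplementedError


def normalise(answer: str) -> str:
    """Bring answers back to a comparable form (here: translate to English, lowercase)."""
    return translate(answer, "en").strip().lower()


def multilingual_vote(prompt_en: str) -> tuple[str, float]:
    """Pose the same task in every language, then measure agreement.

    Returns the majority answer and its vote share; a low share suggests the
    languages 'understood' the task differently and the result is less trustworthy.
    """
    answers = []
    for lang in LANGUAGES:
        prompt = prompt_en if lang == "en" else translate(prompt_en, lang)
        answers.append(normalise(ask_llm(prompt)))

    counts = Counter(answers)
    best, votes = counts.most_common(1)[0]
    return best, votes / len(answers)
```

In this sketch the vote share plays the role of the “spread” described above: a high share could let the chain of thought proceed, while a low one could trigger a retry or a deeper decomposition of the step.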