I came across the ReAct pattern:
https://til.simonwillison.net/llms/python-react-pattern
https://interconnected.org/home/2023/03/16/singularity
Copied and pasted content from this article into GPT-4 to have it explained, had it play the game with me (in the browser, with me as the intermediary), and now I'm making it write JavaScript that will automate my intermediary job (doing the basic math).
Can't get it fully automated: the send button stays greyed out until I make a legit keypress. But everything works up to that point.
Maybe I'm too n00b as a JavaScript h4xx0r.
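For the curious, here's roughly the sketch I'm working from: a minimal version, with placeholder selectors (ChatGPT's DOM changes all the time) and the native-setter trick, which is my guess at why plain value assignment wasn't registering. The real blocker is the final send, which seems to require a trusted keypress.

```javascript
// Sketch of the intermediary script. Selector values are placeholders:
// ChatGPT's DOM changes often, so adjust to whatever the page really uses.
const REPLY_SELECTOR = '.markdown';  // assistant message bodies (assumed)
const INPUT_SELECTOR = 'textarea';   // the prompt box (assumed)

function latestActionExpr() {
  const replies = document.querySelectorAll(REPLY_SELECTOR);
  if (!replies.length) return null;
  const text = replies[replies.length - 1].innerText;
  // The action format from Simon's article: "Action: calculate: 4 * 7 / 3"
  const match = text.match(/Action:\s*calculate:\s*(.+)/);
  return match ? match[1].trim() : null;
}

function relayObservation() {
  const expr = latestActionExpr();
  if (expr === null) return;
  // My whole intermediary job: do the basic math. eval() is acceptable here
  // because I'm only feeding it the model's arithmetic, in my own browser.
  const result = eval(expr);
  const box = document.querySelector(INPUT_SELECTOR);
  // Write through the native setter so the framework's state sees the new
  // value (plain `box.value = ...` leaves React's state empty).
  const setter = Object.getOwnPropertyDescriptor(
    window.HTMLTextAreaElement.prototype, 'value'
  ).set;
  setter.call(box, `Observation: ${result}`);
  box.dispatchEvent(new Event('input', { bubbles: true }));
  // Sending still needs a real keypress: scripted KeyboardEvents have
  // isTrusted: false, which I suspect is why the button stays disabled.
}
```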
ReAct is incredibly powerful. It not only extends the capabilities of any LLM in the areas where they generally suck (being static, doing math), it also opens up the black box: you can watch the way it reasons as the loop goes on.
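To show what I mean by opening the black box, here's a bare-bones sketch of the loop from Simon's post, ported to JavaScript. `callLLM` is a stand-in for whatever chat API you use, and the prompt is condensed from his; the point is that every Thought, Action, and Observation passes through your own code as plain text.

```javascript
// Bare-bones ReAct loop, after the pattern in Simon Willison's post.
// callLLM is a placeholder for whatever chat-completion call you use;
// it takes an array of messages and returns the assistant's reply text.
const SYSTEM_PROMPT = `You run in a loop of Thought, Action, Observation.
Use "Action: calculate: <expression>" when you need arithmetic, then stop
and wait for the Observation. Finish with "Answer: ...".`;

async function react(question, callLLM, maxTurns = 5) {
  const messages = [
    { role: 'system', content: SYSTEM_PROMPT },
    { role: 'user', content: question },
  ];
  for (let turn = 0; turn < maxTurns; turn++) {
    const reply = await callLLM(messages);
    console.log(reply); // every Thought and Action, human-readable
    messages.push({ role: 'assistant', content: reply });
    const action = reply.match(/Action:\s*calculate:\s*(.+)/);
    if (!action) return reply; // no action requested: this is the answer
    const observation = `Observation: ${eval(action[1])}`;
    console.log(observation); // and what the "tool" handed back
    messages.push({ role: 'user', content: observation });
  }
}
```

Nothing in the loop is hidden; if the answer comes out wrong, the faulty Thought or Observation is sitting right there in the log.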
I asked ChatGPT to check what's in my fridge by giving it access to my personal android, which can follow simple commands:
https://www.magyar.blog/i/112089741/playing-react-with-chatgpt-web
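In code terms, the android is just one more entry in the action table. To be clear, this is hypothetical glue: in my browser game the "android" was me, relaying the command and typing back what I saw, so the sketch uses window.prompt as a stand-in for the device.

```javascript
// Extending the loop to more actions is just a lookup table. The android
// is hypothetical: window.prompt lets a human play the device, which is
// exactly what the browser game was.
const sendToAndroid = (command) =>
  window.prompt(`Android, please: ${command}\nWhat does it report?`);

const actions = {
  calculate: (arg) => String(eval(arg)),
  fridge: (arg) => sendToAndroid(arg),
};

function runAction(reply) {
  const match = reply.match(/Action:\s*(\w+):\s*(.+)/);
  if (!match || !actions[match[1]]) return null; // no action: final answer
  const [, name, arg] = match;
  return `Observation: ${actions[name](arg)}`;
}
```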
ReAct is a huge leap forward in explainability. Now it's possible to break down the reasoning of an LLM for debugging, all exposed in a human-readable form.
The piece on Pirate Wires was oddly chilling. Up until earlier today I just laughed about AI x-risk, but there's something about how you describe Altman's caution that gives me a genuinely bad feeling for the first time. I'm hoping it's just an artifact of the piece being well written.