During development I encountered a caveat: Opus 4.5 can’t view or interact with a running terminal, especially one with unusual functional requirements. But despite being blind, it knew enough about the ratatui terminal framework to implement whatever UI changes I asked for. There were a large number of UI bugs likely caused by Opus’s inability to create test cases, most notably failures to account for scroll offsets, which caused clicks to register on the wrong elements. As someone who spent 5 years as a black-box Software QA Engineer unable to review the underlying code, this situation was my specialty. I put my QA skills to work by messing around with miditui and reporting any errors to Opus, occasionally with a screenshot, and it was able to fix them easily. I do not believe these bugs show that LLM agents are inherently better or worse than humans: humans are most definitely capable of making the same mistakes. And even though I’m adept at finding such bugs and offering solutions, I don’t believe I would have avoided causing similar ones had I coded such an interactive app without AI assistance: QA brain is different from software-engineering brain.
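To illustrate the scroll-offset bug class described above, here is a minimal sketch (not miditui’s actual code, and the function names are hypothetical): when a list widget is scrolled, the clicked terminal row alone no longer identifies the item, so the scroll offset has to be added back in.

```rust
/// Buggy mapping: converts a mouse click row to a list index using only
/// the widget's top row, ignoring how far the list is scrolled.
fn clicked_index_buggy(mouse_row: u16, area_top: u16) -> usize {
    (mouse_row - area_top) as usize
}

/// Fixed mapping: add the scroll offset (number of items scrolled off
/// the top) so the click lands on the item actually under the cursor.
fn clicked_index(mouse_row: u16, area_top: u16, scroll: usize) -> usize {
    (mouse_row - area_top) as usize + scroll
}

fn main() {
    // List drawn starting at terminal row 2, scrolled down by 5 items.
    // A click on terminal row 4 should select item 7, not item 2.
    assert_eq!(clicked_index_buggy(4, 2), 2); // wrong once scrolled
    assert_eq!(clicked_index(4, 2, 5), 7); // correct
    println!("ok");
}
```

The bug is invisible until the user scrolls, which is exactly the kind of state-dependent behavior that slips past an agent that can’t drive the UI itself.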
It started with a flash of insight like a thunderbolt in a snowstorm, the sort of insight that can only be induced by high-altitude hypoxia and making breakfast.