Expert Insights

Adam Ben-David advocates reinventing the research and development process in AI-driven software development: moving away from a purely development-centric approach and restoring research to a central role. At the core of his message is the establishment of dedicated research engineering teams whose role extends far beyond feature development to ongoing environmental scanning, auditing, and exploration of emerging technologies.

Adam makes a compelling, evidence-backed argument that lays out a systematic process for research and development in AI-assisted software development and underscores its importance for keeping pace with rapidly evolving technologies.

Hear Adam explain:

  • The need to separate development and research activities within software development teams so that evaluations and explorations of new concepts remain unbiased.
  • The value of having dedicated research engineers who continually scan the rapidly evolving tech horizon and conduct root-cause analyses of AI failures to drive improvement and innovation (a sketch of this evaluation loop follows the list).
  • The way these research engineers launch proofs of concept to identify promising new methods that, if successful, are handed to engineers for production.
  • The potential for AI agents to take on outbound research, read papers, audit code bases, and eventually build proofs of concept, pointing to a future where human engineers manage these AI-driven processes.
  • The concept of an open-source model in which all members, including ancillary teams and even internal consumers, have access to the code base, fostering collaborative development.
  • The importance of fostering research skills in software development teams, since the boundaries of AI's capabilities are still unknown and demand constant exploration of new territory.
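
To make the evaluation-and-failure-report loop concrete, here is a minimal Python sketch. Everything in it (the EvalCase structure, the toy agent, the plain-text report) is a hypothetical stand-in rather than anything described in the conversation; it only illustrates one way research engineers might run evaluation cases against an agent and collect the failures that get handed over to engineers.

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical evaluation case: a prompt plus a check the agent's output must pass.
# Real evaluation suites would be far richer than this.
@dataclass
class EvalCase:
    name: str
    prompt: str
    check: Callable[[str], bool]

def run_evaluations(agent: Callable[[str], str], cases: list[EvalCase]) -> list[dict]:
    """Run every case through the agent and collect the ones that fail."""
    failures = []
    for case in cases:
        output = agent(case.prompt)
        if not case.check(output):
            failures.append({"case": case.name, "prompt": case.prompt, "output": output})
    return failures

def failure_report(failures: list[dict]) -> str:
    """Render a plain-text failure report to hand over to the engineers."""
    if not failures:
        return "All evaluation cases passed."
    lines = [f"{len(failures)} case(s) failed:"]
    for f in failures:
        lines.append(f"- {f['case']}: agent returned {f['output']!r}")
    return "\n".join(lines)

if __name__ == "__main__":
    # Toy agent and toy cases purely for illustration.
    def toy_agent(prompt: str) -> str:
        return "42" if "answer" in prompt else "I don't know"

    cases = [
        EvalCase("knows-the-answer", "What is the answer?", lambda out: out == "42"),
        EvalCase("handles-dates", "What year is it?", lambda out: out.isdigit()),
    ]
    print(failure_report(run_evaluations(toy_agent, cases)))
```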

Quote

"Research engineers spend their time scanning the horizon for new papers, frameworks, and capabilities. They run all the evaluations and generate failure reports, which basically show us, okay, in which cases has the agent failed? Then that gets handed over to the engineers. They'll run that proof of concept, present the findings, and if it's compelling enough, it gets incorporated into the real feature. This way, they are constantly auditing the system and finding ways to incorporate it. We want everyone to have access and to be able to submit changes. It's sort of like checking your own homework; it's better if the engineers don't evaluate it themselves."

Monterail Team Analysis

Here are action-oriented insights for implementing AI-assisted workflows in software development more effectively:

  • Reinstate Research: Rethink the traditional R&D approach. Shift from being purely development-centric to actively incorporating the research element as a significant part of the process. Build teams or support current teams to dedicate time to exploratory initiatives and outbound research.
  • Separate Research and Development: Establish clear lines between research activities and the development process. Ensure unbiased evaluations and advancements by having teams specialize in distinct areas of the process.
  • Foster Continuous Learning: Encourage teams to scan the horizon for emerging technologies and stay updated. Implement a culture of continuous learning, given the rapid pace of technological advancement in the AI space.
  • Implement Transparent Code Repositories: Enable a system where all teams, including internal consumers, have broad access to the code base so they can engage in collaborative development and learning.
  • Cultivate a Research Mindset: Train your teams, especially engineers, to develop a research perspective, since the boundaries of AI's capabilities are still being explored and defined.
  • Let AI Assist With Research: Once your AI systems mature, leverage their potential to assist in research by scanning new papers, auditing codebases, and eventually building proofs of concept (see the paper-scanning sketch after this list).
  • Create Feedback Loops: Encourage all stakeholders, including ancillary teams and internal consumers of AI, to provide feedback and engage in development. Maintain an open-source model, letting users submit changes that can be reviewed and merged.
  • Encourage Innovation: Condition your teams to continuously build, test, refine, and scale up new methodologies that they think are compelling enough for implementation.
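
As a hedged illustration of the "AI assisting research" point above, the sketch below queries the public arXiv API for recent papers matching a few keywords, the kind of horizon-scanning step that a research engineer, or eventually an agent, could triage and summarize. The keyword choice and the recent_papers helper are assumptions made for this example, not something from the source.

```python
import urllib.parse
import urllib.request
import xml.etree.ElementTree as ET

ARXIV_API = "http://export.arxiv.org/api/query"
ATOM_NS = {"atom": "http://www.w3.org/2005/Atom"}

def recent_papers(keywords: str, max_results: int = 5) -> list[dict]:
    """Fetch the newest arXiv entries matching the keywords (illustrative only)."""
    params = urllib.parse.urlencode({
        "search_query": f"all:{keywords}",
        "sortBy": "submittedDate",
        "sortOrder": "descending",
        "max_results": max_results,
    })
    with urllib.request.urlopen(f"{ARXIV_API}?{params}", timeout=30) as resp:
        feed = ET.fromstring(resp.read())
    papers = []
    for entry in feed.findall("atom:entry", ATOM_NS):
        papers.append({
            "title": entry.findtext("atom:title", default="", namespaces=ATOM_NS).strip(),
            "link": entry.findtext("atom:id", default="", namespaces=ATOM_NS),
            "summary": entry.findtext("atom:summary", default="", namespaces=ATOM_NS).strip(),
        })
    return papers

if __name__ == "__main__":
    # A research engineer (or, later, an agent) could triage this list,
    # summarize the abstracts, and flag candidates for a proof of concept.
    for paper in recent_papers("LLM code generation evaluation"):
        print(f"- {paper['title']}\n  {paper['link']}")
```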