
GitHub’s Copilot Enterprise hits general availability


GitHub today announced the general availability of Copilot Enterprise, the $39/month version of its code completion tool and developer-centric chatbot for large businesses. Copilot Enterprise includes all of the features of the existing Business plan, including IP indemnity, but extends them with a number of crucial features for larger teams. The highlight is the ability to reference an organization’s internal code and knowledge base. Copilot is now also integrated with Microsoft’s Bing search engine (currently in beta), and soon users will be able to fine-tune Copilot’s models on a team’s existing codebase.

With that, new developers on a team can, for example, ask Copilot how to deploy a container image to the cloud and get an answer that is specific to the process in their organization. For a lot of developers, after all, the roadblock to being productive after moving companies isn’t necessarily understanding the codebase but understanding the different processes, though Copilot can obviously help with understanding the code, too.


Many teams already keep their documentation in GitHub repositories today, making it relatively easy for Copilot to reason over it. Indeed, as GitHub CEO Thomas Dohmke told me, since GitHub itself stores virtually all of its internal documents on the service and recently gave all of its employees access to these new features, some people have started using Copilot for non-engineering questions, too, asking it about vacation policies, for example.
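GitHub hasn’t published how Copilot Enterprise indexes that documentation, but the general pattern is well known: embed the docs, retrieve the passages most relevant to a question, and place them in the model’s context. Purely as an illustration (the library, embedding model, docs path and chunking below are my assumptions, not anything GitHub has confirmed), it might look something like this:

```python
# Illustrative sketch of retrieval over markdown docs in a repo.
# Not GitHub's implementation; library, model and paths are assumptions.
from pathlib import Path

from sentence_transformers import SentenceTransformer, util  # assumed embedding library

model = SentenceTransformer("all-MiniLM-L6-v2")  # hypothetical embedding model choice

# Index every markdown file in a hypothetical internal docs repo, one chunk per paragraph.
chunks = []
for path in Path("internal-docs").rglob("*.md"):
    for paragraph in path.read_text().split("\n\n"):
        if paragraph.strip():
            chunks.append((str(path), paragraph.strip()))

chunk_embeddings = model.encode([text for _, text in chunks], convert_to_tensor=True)

def top_docs(question: str, k: int = 3):
    """Return the k doc paragraphs most similar to the question."""
    q = model.encode(question, convert_to_tensor=True)
    scores = util.cos_sim(q, chunk_embeddings)[0]
    best = scores.argsort(descending=True)[:k]
    return [chunks[int(i)] for i in best]

# The retrieved snippets would then go into the chat model's context, so the
# answer reflects the organization's own process rather than a generic one.
for source, text in top_docs("How do we deploy a container image to the cloud?"):
    print(source, "->", text[:80])
```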

Dohmke told me that customers had been asking for these features to reference internal information from the earliest days of Copilot. “A lot of the things that developers do within organizations are different to what they do at home or in open source, in the sense that organizations have a process or a certain library to use — and many of them have internal tools, systems and dependencies that do not exist like that on the outside,” he noted.

As for the Bing integration, Dohmke noted that it will be useful for asking Copilot about things that may have changed since the model was originally trained (think open source libraries or APIs). For now, the feature is only available in the Enterprise version, and while Dohmke wouldn’t say much about whether it will come to other editions, I wouldn’t be surprised if GitHub brought this capability to the other tiers at a later point, too.
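GitHub hasn’t detailed how the Bing agent is wired up, but the general shape of a search-grounded answer is simple: run a web search first, then hand the fresh snippets to the model alongside the question. The sketch below uses a hypothetical bing_search() helper and the OpenAI chat API purely for illustration; none of it is GitHub’s actual implementation.

```python
# Sketch of the "search-grounded answer" pattern; not GitHub's implementation.
from openai import OpenAI

client = OpenAI()

def bing_search(query: str) -> list[str]:
    """Hypothetical helper: return short text snippets for current web results."""
    raise NotImplementedError("stand-in for the real search backend")

def answer_with_fresh_context(question: str) -> str:
    # Fetch results the model could not have learned at training time,
    # e.g. an API or library that changed after the training cutoff.
    snippets = bing_search(question)
    context = "\n".join(f"- {s}" for s in snippets)
    response = client.chat.completions.create(
        model="gpt-4",  # assumed model choice for the sketch
        messages=[
            {"role": "system", "content": "Answer using the search results when relevant."},
            {"role": "user", "content": f"Search results:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content
```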


One feature that will likely remain an enterprise feature — in part because of its associated cost — is fine-tuning, which will launch soon. “We let companies pick a set of repositories in their GitHub organization and then fine-tune the model on those repositories,” Dohmke explained. “We’re abstracting the complexity of generative AI and fine-tuning away from the customer and let them leverage their codebase to generate an optimized model for them that then is used within the Copilot scenarios.” He did note that this also means that the model can’t be as up-to-date as when using embeddings, skills and agents (like the new Bing agent). He argues that all of this is complementary, though, and the customers who are already testing this feature are seeing significant improvements. That’s especially true for teams that are working with codebases in languages that aren’t as widely used as the likes of Python and JavaScript, or with internal libraries that don’t really exist outside of an organization.
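GitHub isn’t sharing the details of that pipeline, but for a sense of what fine-tuning on a set of repositories involves, here is a rough sketch using the open-source Hugging Face stack. The base model, repository paths and hyperparameters are placeholders of mine, not anything GitHub has confirmed.

```python
# Rough sketch of fine-tuning a causal code model on an organization's repositories.
# Not GitHub's pipeline; base model, repos and hyperparameters are placeholder assumptions.
from pathlib import Path

from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

BASE_MODEL = "bigcode/starcoderbase-1b"  # assumed open base model, not Copilot's

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(BASE_MODEL)

# Gather source files from the repositories the customer selected (hypothetical paths).
files = [p.read_text(errors="ignore")
         for repo in ["repos/internal-lib", "repos/payments-service"]
         for p in Path(repo).rglob("*.py")]

dataset = Dataset.from_dict({"text": files}).map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=1024),
    batched=True, remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="copilot-ft", num_train_epochs=1,
                           per_device_train_batch_size=1),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()  # the resulting weights reflect the org's own libraries and conventions
```

The trade-off Dohmke describes falls out of this naturally: a fine-tuned model bakes the codebase into its weights at training time, whereas embeddings and agents can pull in whatever is current at the moment of the question.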

On top of talking about today’s release, I also asked Dohmke about his high-level thinking about where Copilot is going next. The answer is essentially “more Copilot in more places.” “I think, in the next year, we’re going to see an increasing focus on that end-to-end experience of putting Copilots where you already do the work as opposed to creating a new destination to go and copy and paste stuff there,” he said. “I think that’s where we at GitHub are incredibly excited about the opportunity that we have by putting Copilot on github.com, by having Copilot available in the place where developers are already collaborating, where they’re already building the world’s software.”


Talking about the underlying technology and where that is going, Dohmke noted that the auto-completion feature currently runs on GPT-3.5 Turbo. Because of its latency requirements, GitHub never moved that model to GPT-4, but Dohmke also noted the team has updated the model “more than half a dozen times” since the launch of Copilot Business.

As of now, it doesn’t look like GitHub will follow the Google model of differentiating its pricing tiers by the size of the models that power those experiences. “Different use cases require different models. Different optimizations — latency, accuracy, quality of the outcome, responsible AI — for each model version play a big role to make sure that the output is ethical, compliant and secure and doesn’t generate lower-quality code than what our customers expect. We will continue going down that path of using the best models for the different pieces of the Copilot experience,” Dohmke said.
