Engineers Share The Projects They’re Most Excited to Tackle in 2026

Learn how engineers at Celonis, Clear Street, Cleo and elsewhere are preparing to tackle high-impact projects in the year ahead.

Written by Olivia McClure
Published on Dec. 12, 2025
REVIEWED BY
Justine Sullivan | Dec 15, 2025

Forget healthy diets and book-reading challenges — these engineering teams are heading into the new year with one thing on their minds: fast-paced, high-impact projects. 

Paul Griggs, a senior partner within PwC’s U.S. division, and his teammates are preparing to launch Industry Edge, a new program designed to help deliver sharper insights, faster solutions and deeper collaboration for the firm’s clients across a wide range of industries, including consumer markets and financial services. 

“We are going deep in each industry and wide across them — so leaders don’t miss the ripple effects that can mean the difference between reacting to change and shaping it,” he said. 

Meanwhile, at Clear Street, Staff Software Engineer Ben Becker and his peers are excited to continue transforming Clear Street Studio, an all-in-one portfolio management platform. He shared that his team is turning the platform into a more flexible, widget-based system that offers customizable workspaces, real-time data integration and more. 

“Studio is poised to become a flagship part of our client experience, and I’m genuinely looking forward to seeing users tailor it to their workflows and unlock the full potential of the tools we’ve been developing behind the scenes,” Becker said. 

For Senior Manager of Product Development Kumar Saurabh and his team at Cleo, 2026 will be a year defined by AI, as they continue leading an AI-powered project focused on supply chain orchestration. According to him, his team aims to create an integrated, intelligent system that optimizes end-to-end supply chain processes, which will ultimately reduce inefficiencies and boost agility.

“This initiative excites me because it combines cutting-edge technology with a critical business function, delivering measurable impact on cost, speed and customer satisfaction,” Saurabh said.

Griggs, Becker and Saurabh, along with employees from eight other companies, describe the projects they’re most eager to tackle in the new year, the technologies and practices driving each one, and the role each initiative plays in achieving companywide goals. 

Sarangadhar Sahani
Senior Director of Engineering • Celonis

Celonis leverages process mining and AI to create a digital twin, or dynamic digital model, of an organization’s end-to-end processes, providing a common understanding of how the business operates as well as where hidden value can be unlocked and how to capitalize on this value. 

 

Describe a project you’re especially eager to tackle in the new year.

Celonis Networks extends process intelligence beyond company borders, enabling secure, at-scale collaboration with business partners. It eliminates costly blind spots in shared workflows by providing real-time transparency on key process outcomes, like order or invoice status, allowing for early interventions. Its hub-and-spoke architecture simplifies partner onboarding and data harmonization, while its core design ensures security and relevance by sharing only minimal, essential data rather than sensitive, granular activity. We will also spearhead critical data integration projects and drive key time-to-value initiatives.

 

“Celonis Networks extends process intelligence beyond company borders, enabling secure, at-scale collaboration with business partners.”

 

What technologies and/or practices is your team leveraging to tackle this project?

To tackle these projects, our team is leveraging hands-on experience in building and scaling cloud-native, distributed systems using a microservices architecture, Java and the Spring framework, which we manage by applying DevOps best practices. This is supported by robust data engineering, featuring scalable extract-transform-load (ETL) and extract-load-transform (ELT) pipelines built on Databricks to process and deliver reliable data. On the front end, we follow a user-centered design methodology with modern frameworks like Angular to deliver a highly user-friendly UI. Finally, we ensure reliability through embedded quality best practices, including test-driven development and comprehensive automated testing within our continuous integration/continuous delivery pipeline.
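The extract-transform-load flow described above can be sketched in miniature. This is an illustrative stand-in only, assuming a generic record shape with an `order_id` and `status` field and an in-memory "warehouse"; it is not Celonis or Databricks code.

```python
# Minimal extract-transform-load sketch: raw records are pulled from a
# source, normalized and validated, then loaded into a target store.
# Names and the in-memory "warehouse" are illustrative assumptions.

def extract(source):
    """Pull raw records from a source system."""
    return list(source)

def transform(records):
    """Normalize field values and drop records that fail validation."""
    cleaned = []
    for rec in records:
        if not rec.get("order_id"):
            continue  # basic validation: skip records missing a key field
        cleaned.append({
            "order_id": str(rec["order_id"]).strip(),
            "status": rec.get("status", "unknown").lower(),
        })
    return cleaned

def load(records, warehouse):
    """Write cleaned records into the target store, keyed by order ID."""
    for rec in records:
        warehouse[rec["order_id"]] = rec
    return warehouse

raw = [{"order_id": " A-1 ", "status": "OPEN"}, {"status": "CLOSED"}]
warehouse = load(transform(extract(raw)), {})
```

In a real pipeline the transform step carries the bulk of the logic; keeping it a pure function of its input, as here, is what makes such pipelines testable and rerunnable.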

 

How does this project tie into larger company goals?

This work is fundamentally tied to our larger company goals, serving as a primary engine for sustainable growth. It directly addresses the full customer lifecycle, focusing on optimizing customer acquisition, accelerating user activation to help users find value quickly and improving long-term engagement to boost retention. To achieve this, we are strategically employing product-led growth techniques, meaning the product itself is engineered to be the main driver of adoption, engagement and expansion, allowing us to scale our customer base more efficiently and build a self-reinforcing growth model.

 

 

Austin Z.
Director of Software Engineering • Corporate Tools LLC

Corporate Tools’ entity management software is designed to help businesses more effectively handle legal and administrative tasks. 

 

Describe a project you’re especially eager to tackle in the new year.

I’m really excited to tackle a major scalability challenge with our mail processing system. We’ve got an automated pipeline that works great for processing millions of mail items for companies across the United States, but we’re hitting some serious bottlenecks, as we are receiving more and more mail every year. The big project this year is rebuilding core parts of the system to handle 10 times the volume: We’re talking about scaling up to process tens of millions of documents efficiently while maintaining the same level of accuracy in routing mail to our customers’ virtual mailboxes. 

 

“I’m really excited to tackle a major scalability challenge with our mail processing system.”

 

What technologies and/or practices is your team leveraging to tackle this project?

We’re overhauling the architecture with a focus on horizontal scaling, using Kafka to handle the massive message throughput, Kubernetes for better orchestration and resource management, and Ruby for our application logic. The key is optimizing our LLM integration for high-throughput document processing while maintaining accuracy. We’re also implementing better caching strategies and potentially moving some of the heavier AI processing to more efficient patterns that can handle the increased load without breaking the bank on compute costs.
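One caching strategy of the kind described above can be sketched as a content-addressed cache: identical documents are processed once, and repeats are served from cache, which cuts per-item compute at high volume. This is a minimal illustration, not Corporate Tools' actual pipeline; the extraction step is a placeholder for a real LLM call.

```python
# Content-addressed cache sketch for high-volume document processing:
# identical inputs (e.g. duplicate mail scans) hash to the same key, so
# the expensive step runs only once per unique document.
import hashlib

class DocumentProcessor:
    def __init__(self):
        self.cache = {}
        self.calls = 0  # how many times the expensive step actually ran

    def _expensive_extract(self, text):
        self.calls += 1
        # placeholder for the real (slow, costly) model inference
        return {"length": len(text), "preview": text[:20]}

    def process(self, text):
        key = hashlib.sha256(text.encode("utf-8")).hexdigest()
        if key not in self.cache:
            self.cache[key] = self._expensive_extract(text)
        return self.cache[key]

proc = DocumentProcessor()
proc.process("invoice for unit 4B")
proc.process("invoice for unit 4B")  # cache hit: no second model call
```

Hashing the content rather than a document ID means the cache also catches the same document arriving through different channels, which matters at tens of millions of items.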

 

How does this project tie into larger company goals?

By solving these scalability issues, we can improve the reliability and speed of this service for our existing customers who are already pushing our current limits. It’s basically removing the technical ceiling on our business growth for this product, ensuring we can handle whatever volume comes our way without degrading service quality.

 

 

Paul Griggs
Senior Partner, PwC U.S. • PwC

PwC is a professional services firm that works with clients across a wide range of industries, such as energy, banking and healthcare, specializing in areas such as digital assets and cryptocurrency, generative AI and cybersecurity. 

 

Describe a project you’re especially eager to tackle in the new year.

I’m excited about the launch of Industry Edge, a bold new program designed to help deliver sharper insights, faster solutions and deeper collaboration across industries. Built on PwC’s deep sector expertise and cross-industry perspective, Industry Edge helps clients navigate change, anticipate impacts and unlock growth across five major industries: consumer markets, financial services, health industries, industrial products and services, and technology, media and telecommunications. To mark its debut, PwC is releasing five marquee perspectives that spotlight the forces reshaping each industry, illustrating the type of forward-looking insight at the core of the program.

 

“Built on PwC’s deep sector expertise and cross-industry perspective, Industry Edge helps clients navigate change, anticipate impacts and unlock growth across five major industries.”

 

What technologies and/or practices is your team leveraging to tackle this project?

Industry Edge is about helping our clients see around corners. We are going deep in each industry and wide across them — so leaders don’t miss the ripple effects that can mean the difference between reacting to change and shaping it.

How Industry Edge Ties into PwC’s Goals

“Industry Edge is built on PwC’s purpose-driven platform that combines: 

  • Connected insight: proprietary research, sector-specific outlooks and real-time industry intelligence.
  • Collaborative expertise: multidisciplinary PwC teams working across industries to anticipate and architect client futures.
  • Proven impact: solutions that turn disruption into competitive advantage, grounded in data and technology.”

 

 

Sergio Mena
Senior Staff Software Engineer • Circle

Circle’s platform connects traditional finance and digital assets in an effort to make global money movement as seamless as sending an email. 

 

Describe a project you’re especially eager to tackle in the new year.

I’m especially excited about Multi-Proposer, a major evolution of Arc, the open Layer-1 blockchain we’re building at Circle. Arc combines a high-performance consensus engine with Ethereum Virtual Machine (EVM)-compatible execution, designed to provide scalable and reliable infrastructure for developers and financial applications. Today, each block in Arc is proposed by a single node. With Multi-Proposer, multiple nodes will propose in parallel — boosting throughput, censorship resistance and resilience against delays or manipulation.

 

“Today, each block in Arc is proposed by a single node. With Multi-Proposer, multiple nodes will propose in parallel — boosting throughput, censorship resistance and resilience against delays or manipulation.”

 

Multi-Proposer primarily lives at the consensus layer, but we’re also introducing additional innovations to make multiple proposers possible. It’s a complex engineering challenge that requires coordination between components that traditionally operate one after another. By tackling it, we’re building a network that remains fast and reliable even under demanding or adversarial conditions. This project blends deep distributed systems theory with practical engineering, the kind of problem that makes building on Arc both challenging and deeply rewarding.

 

What technologies and/or practices is your team leveraging to tackle this project?

Our stack is built primarily in Rust, giving us safety and performance for consensus-critical code. We rely on Reth, the modular Ethereum execution client, and Malachite, a Byzantine fault-tolerant consensus engine, to orchestrate Multi-Proposer logic. Arc coordinates consensus and execution through the standard Engine API interface. Within the consensus layer, Malachite ensures proposals are consistently ordered across the network, supporting both efficiency and fault tolerance.
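The core difficulty of consistent ordering across proposers can be illustrated with a toy example: when several nodes propose in parallel, every honest node must derive the same final sequence regardless of network arrival order. The sort-by-(round, proposer) rule below is a deliberately simple stand-in; Arc and Malachite's actual ordering logic is more sophisticated and is not shown here.

```python
# Toy illustration of one challenge Multi-Proposer raises: proposals from
# multiple nodes arrive in different orders at different peers, yet all
# peers must agree on a single canonical sequence. A deterministic sort
# key gives every node the same answer from the same set of proposals.

def canonical_order(proposals):
    """Return proposals in an order every node can reproduce locally."""
    return sorted(proposals, key=lambda p: (p["round"], p["proposer"]))

# Two nodes receive the same proposals in opposite network orders...
node_a = [{"round": 1, "proposer": "n2", "tx": "b"},
          {"round": 1, "proposer": "n1", "tx": "a"}]
node_b = list(reversed(node_a))

# ...but both derive the identical canonical sequence.
ordered_a = canonical_order(node_a)
ordered_b = canonical_order(node_b)
```

The hard parts in a real system, equivocation, missing proposals and adversarial timing, are exactly what a BFT engine like Malachite exists to handle.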

On the practices front, we focus on reproducibility, correctness and disciplined engineering. Our agile workflow with Jira promotes iteration and fast feedback. Every change undergoes rigorous code review, continuous integration checks and extensive testing, from unit and integration tests to full end-to-end and system testing. These practices help ensure our releases remain reliable, secure and predictable.

 

How does this project tie into larger company goals?

Multi-Proposer advances Arc’s mission to provide a trusted foundation for real-world economic activity with blockchain infrastructure that is efficient, distributed and censorship-resistant. By allowing multiple proposers to operate at once, we eliminate single points of contention and create a fairer, more resilient consensus process.

Arc is also distinctive in that its native gas asset is USDC, enabling stable and predictable dollar-denominated transaction fees, which provides simplified user experiences and accounting for global businesses. This aligns perfectly with Circle’s broader mission of raising global economic prosperity through the frictionless exchange of value — a world where trillions in stablecoins and financial transactions happen natively on the internet. Arc is at the foundation of that future: a global platform that enables scale, speed and coordination across people, companies and even machines.

 

 

Daniel Wickert
Senior Engineering Manager • HERE

HERE Technologies’ unified live map provides location data that helps automakers and enterprises build reliable automated driving systems, improve electric vehicle battery consumption and more.

 

Describe a project you’re especially eager to tackle in the new year.

As a senior engineering manager, I am responsible for HERE WeGo engineering: HERE WeGo mobile apps, the HERE WeGo website and our app back-end services. A constant challenge when working on complex software is quality assurance. Automated tests help to ensure things keep working. We have many different kinds of automated tests: unit tests, component tests and end-to-end tests. This year, WeGo and the HERE SDK QA teams have automated a large portion of our smoke and regression test suite that our QA team used to execute manually.

 

“This year, WeGo and the HERE SDK QA teams have automated a large portion of our smoke and regression test suite that our QA team used to execute manually.”

 

How Wickert’s Team is Enhancing End-to-End Testing

“Now that we’ve laid the groundwork, I am super excited to bring our end-to-end testing to the next level. 

  • More visual tests: Ensure the expected map features and overlay UI are correctly rendered.
  • Better integration into team processes: End-to-end tests give early feedback, but handling failed tests is still very manual. A QA engineer looks at the failure, manually reproduces it, creates tickets and more. Much of this work can and should be automated.
  • Explore how AI can help with the above: We already use GitHub Copilot to support test case development, and we already employ OpenCV for visual testing, but we’ve only scratched the surface.”
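The visual tests described above boil down to comparing a rendered frame against a baseline. The sketch below is a simplified stand-in for an OpenCV-based check, using plain pixel grids and a hypothetical 20 percent tolerance; real map rendering varies (tile loading, anti-aliasing), which is why a threshold is used instead of exact equality.

```python
# Simplified visual regression check: compute the fraction of pixels that
# differ between a baseline grid and a freshly rendered grid, and pass
# the test only if the mismatch ratio stays under a tolerance.

def mismatch_ratio(baseline, rendered):
    """Fraction of pixels that differ between two equal-sized grids."""
    total = sum(len(row) for row in baseline)
    diffs = sum(
        1
        for row_b, row_r in zip(baseline, rendered)
        for px_b, px_r in zip(row_b, row_r)
        if px_b != px_r
    )
    return diffs / total

baseline = [[0, 0, 1], [1, 1, 0]]
rendered = [[0, 0, 1], [1, 0, 0]]  # one pixel off out of six
passes = mismatch_ratio(baseline, rendered) <= 0.2  # 20% tolerance
```

In practice the tolerance, and which screen regions are compared at all, are tuned per feature so that legitimate rendering variation does not produce noisy failures.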

What technologies and/or practices is your team leveraging to tackle this project?

HERE WeGo is mostly written in Flutter with some native code for Android and iOS. Most of our tests are written in Flutter as well, using our own custom framework built on flutter_driver. These tests are maintained by the app developers. End-to-end tests, by contrast, are maintained by the QA team and written with Appium. Appium is the industry standard for mobile end-to-end testing, so it is easy to pick up and integrates with other tools like Cucumber. Our test engineers also use GitHub Copilot to speed up test development and integration with other systems.

 

How does this project tie into larger company goals?

We see HERE WeGo as a guardian of the quality of the HERE tech stack. Our testing ensures quality not just for HERE WeGo itself but also our enablers and services — HERE SDK, HERE location services, offline maps and more. Of course, all teams do their own testing as well, but we put it all together in a real-world consumer application. This gives us an end-to-end view on all our products. With our end-to-end test suite, we can provide early feedback to our enablers and find defects much earlier, which in turn prevents shipping these defects to our customers.

 

 

RELATED READING: How a Culture of Learning Unlocks Innovation on These Engineering Teams

 

Ben Becker
Staff Software Engineer • Clear Street

Clear Street offers a cloud-native brokerage and clearing system that’s designed to add efficiency to the market while transparently minimizing risk and cost for clients. 

 

Describe a project you’re especially eager to tackle in the new year.

As a staff software engineer at Clear Street, I’ve had the opportunity to contribute to a wide range of fast-paced, high-impact projects. Looking ahead to 2026, I am most excited about the work we are putting into Clear Street Studio™, which has become my primary focus going into the new year. Clear Street Studio is our all-in-one portfolio management platform — spanning our execution management system, portfolio management services, risk management system and trade processing — and we are excited to be developing a range of enhancements in service to our clients.

 

“Clear Street Studio is our all-in-one portfolio management platform — spanning our execution management system, portfolio management services, risk management system and trade processing — and we are excited to be developing a range of enhancements in service to our clients.”

 

We’re evolving Studio into an even more flexible, widget-based system featuring customizable workspaces, composable dashboards, real-time data integration and dynamic linking across components. Bringing all of this together has been a major cross-team effort, but that collaboration is part of what makes the project so energizing. Studio is poised to become a flagship part of our client experience, and I’m genuinely looking forward to seeing users tailor it to their workflows and unlock the full potential of the tools we’ve been developing behind the scenes.

 

What technologies and/or practices is your team leveraging to tackle this project?

What excites me most is how we’re bringing AI into the platform. Studio can deliver the full firehose of information that professional traders need, but that volume can be overwhelming. AI helps by highlighting what matters, surfacing key insights and allowing users to drill down or launch deeper research from the same interface. 

We’re also exploring AI as a trading and platform assistant that can recommend the right widgets, pull up relevant charts and answer questions about positions or risk. Behind the scenes, this requires tight integration across our data systems, permissions and AI services to ensure the outputs are accurate, safe and genuinely useful.

 

How does this project tie into larger company goals?

One of the unique things about Clear Street is how much of the trading stack we cover. While many fintech companies focus on a narrow slice like retail trading or custody, we span the full lifecycle, from execution to clearing, financing and everything in between. That breadth is great for clients and employees, but it also creates a UX challenge: How do you present all that functionality in a way that works for very different types of users?

Traders, risk managers, operations teams and analysts all need distinct tools and workflows. Studio is our solution. By shifting to a widget-based model, we can tailor the experience to each persona without fragmenting the product. A portfolio manager can build a dense, real-time trading dashboard, while an operations user might rely on filtered grids and exception tools. Even someone new to trading can start simple and grow into advanced views. In this way, Studio supports our goal of offering one unified platform that still feels “just right” for every client and every team.

 

 

Kumar Saurabh
Senior Manager of Product Development • Cleo

Cleo’s platform is designed to give organizations visibility into critical end-to-end business flows happening across their ecosystems of partners and customers, marketplaces, and internal cloud and on-premise applications.

 

Describe a project you’re especially eager to tackle in the new year.

In the coming year, I’m especially eager to lead a project focused on supply chain orchestration powered by AI. The goal would be to create an integrated, intelligent system that optimizes end-to-end supply chain processes — improving visibility, resilience and decision-making. By leveraging AI to enhance operational metrics such as cycle time, order accuracy and fulfillment speed, we can reduce inefficiencies and boost agility. This initiative excites me because it combines cutting-edge technology with a critical business function, delivering measurable impact on cost, speed and customer satisfaction.

 

“By leveraging AI to enhance operational metrics such as cycle time, order accuracy and fulfillment speed, we can reduce inefficiencies and boost agility.”

 

What technologies and/or practices is your team leveraging to tackle this project?

To tackle this project, our team is leveraging a combination of advanced technologies and best practices. On the technology side, we’re using AI-driven optimization engines, ML models for demand and supply forecasting, and cloud-based orchestration platforms to ensure scalability and real-time visibility.

 

How does this project tie into larger company goals?

This project directly supports Cleo’s strategic goals of driving operational excellence and delivering superior customer experiences. By implementing AI-driven supply chain orchestration, we enable real-time visibility, faster decision-making and improved operational metrics such as cycle time, order accuracy, fulfillment speed and on-time in-full performance. These improvements align with Cleo’s commitment to innovation, scalability and resilience, strengthening our ability to meet customer expectations globally. Ultimately, this initiative positions Cleo as a leader in intelligent supply chain solutions, reinforcing our mission to simplify and optimize complex business ecosystems.

 

 

Josh Pohl
Senior Front-End Engineer • AKASA

AKASA’s AI-powered platform is designed to optimize revenue cycle management for healthcare systems. 

 

Describe a project you’re especially eager to tackle in the new year.

I’m especially excited to build more capable AI for complex healthcare workflows. Over the past year, we’ve made meaningful progress applying advanced ML and LLMs to extract structure from highly variable clinical and operational documents. In the new year, we’re pushing this even further and expanding the system’s ability to reason across multiple documents, understand nuanced context and surface richer, more accurate outputs. This next phase lets us break through some of healthcare’s most persistent administrative bottlenecks and deliver AI that feels more adaptive, more trustworthy and, ultimately, more helpful to the teams who rely on it every day.

 

“Over the past year, we’ve made meaningful progress applying advanced ML and LLMs to extract structure from highly variable clinical and operational documents.”

 

What technologies and/or practices is your team leveraging to tackle this project?

We’re combining state-of-the-art language models with carefully engineered retrieval, validation and reasoning pipelines that reflect real-world healthcare workflows. Our team uses modern ML development tooling, including AI-assisted coding environments, to iterate quickly and with confidence. These tools help us test alternative architectures, generate prototypes, and validate assumptions in hours instead of days. We also lean heavily on collaborative engineering practices like tight feedback loops, well-instrumented experiments and continuous evaluation against real encounter data. Together, these technologies and practices allow us to build systems that are not only powerful but also aligned with the accuracy, reliability and transparency standards that healthcare requires.

 

How does this project tie into larger company goals?

This work sits at the center of AKASA’s broader mission: bringing AI to some of the hardest, most consequential problems in healthcare. By teaching our systems to understand complex clinical information with the nuance of a human expert, we’re opening the door to more accurate clinical documentation and fewer barriers across the care journey.

 

 

Georg Ulrich
AI Engineer • Cedar

Cedar’s AI-powered platform is designed to improve the healthcare financial experience for both providers and patients. 

 

Describe a project you’re especially eager to tackle in the new year.

In the new year, I’m excited to keep advancing Kora, our AI agent for patient billing support. Many people still prefer the phone when they have questions about their medical bills, and those calls often happen at stressful moments. We’ve already seen how Kora can make these conversations clearer and faster by authenticating callers, gathering the right context and resolving questions that map to cases AI can safely handle. The team and I are going to build on that foundation and continue improving how Kora prepares information for human agents so they can focus on the more complex cases. For me, this project is about giving patients quick, accurate answers, expanding Kora’s capability to respond to increasingly difficult and varied questions, and helping agents get the clarity they need to support someone well.

 

“For me, this project is about giving patients quick, accurate answers, expanding Kora’s capability to respond to increasingly difficult and varied questions, and helping agents get the clarity they need to support someone well.”

 

What technologies and/or practices is your team leveraging to tackle this project?

We’re building on the combination of high-quality AI models and Cedar’s deep infrastructure of integrations, billing logic and patient-specific data. We can’t script every answer because people ask follow-up questions that take calls in many directions, so we rely on AI systems that can stay within the right data and guidelines while responding naturally. To ensure safety, we use simulation, where Kora practices with an AI acting as the patient; perform a human review of selected calls; follow deterministic metrics that show how people move through the flow of a call; and employ AI-driven issue detection that flags outliers. Recently, we shifted toward giving the AI more freedom in phrasing, as long as it stays accurate, and we’ve already seen more patients completing flows and saying, “That’s all I needed.”

 

How does this project tie into larger company goals?

This work directly supports Cedar’s mission to empower us all to easily and affordably pursue the care we need. By letting Kora resolve appropriate questions and prepare the essential context before handing off to an agent, we’ve reduced handle times by up to 25 percent, even with the complexity of healthcare calls. As more providers come online and call volumes grow, continuing to mature Kora helps Cedar deliver clear, dependable, patient-centered support at scale with the level of accuracy people deserve when dealing with their healthcare bills.

 

 

Ryan Pinnock
Solutions Engineer II • TrueML

TrueML is a fintech company with a family of subsidiaries, all focused on revolutionizing the experience of consumers seeking financial health.

 

Describe a project you’re especially eager to tackle in the new year.

I’m especially excited to take the Riverty work we completed and evolve it into a fully standardized “European Creditor Ingestion and Reconciliation Framework.” We proved we can handle complex files end to end, and now I want to scale that into something repeatable across multiple portfolios.

The project starts simply: giving teams a consistent, reliable way to upload creditor files, validate them and generate clean placement, adjustment and reconciliation outputs. As the work deepens, I want to make the entire ingestion process more automated and less dependent on manual checks. Technically, this means transforming our current Riverty mapper into a modular pipeline that can parse debt/contract/claims/credit structures, enforce strict validation rules, run waterfall credit allocation, detect duplicates and automatically reconcile new files against historical placements to prevent double-applying credits. Ultimately, I want to build a reusable ingestion template that significantly reduces onboarding time for any European creditor.
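The waterfall credit allocation step mentioned above can be sketched as applying a credit across a consumer's open invoices in order until the credit is exhausted. The oldest-first rule and the field names here are illustrative assumptions, not TrueML's actual business logic.

```python
# Waterfall allocation sketch: a credit flows down a sorted list of open
# invoices, paying each one off (fully or partially) until the credit
# runs out; any remainder is returned for separate handling.

def allocate_credit(invoices, credit):
    """Apply `credit` across invoices in date order; return remainder."""
    for inv in sorted(invoices, key=lambda i: i["date"]):
        if credit <= 0:
            break
        applied = min(credit, inv["balance"])
        inv["balance"] -= applied
        credit -= applied
    return credit  # any unallocated remainder

invoices = [
    {"id": "INV-2", "date": "2025-03-01", "balance": 50.0},
    {"id": "INV-1", "date": "2025-01-15", "balance": 80.0},
]
remainder = allocate_credit(invoices, 100.0)  # INV-1 cleared, INV-2 partial
```

Making the allocation order an explicit, deterministic rule is what allows reconciliation later: rerunning the same file against the same history must always produce the same balances.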

 

“Ultimately, I want to build a reusable ingestion template that significantly reduces onboarding time for any European creditor.”

 

What technologies and/or practices is your team leveraging to tackle this project?

We’ll approach the project with a mix of accessible tooling and deeper engineering infrastructure. At a high level, the team will rely on clear documentation, iterative testing with operations and tight collaboration to refine each creditor’s mapping and logic. We’ll continue using sample data to validate assumptions early and avoid rework.

On the technical side, the project will use Retool for the self-serve UI where operations can upload files and review validation results. The mapper and recon logic will be written in JavaScript and Python, leveraging PapaParse, JSZip and Pandas for parsing, transformations and file generation. Snowflake will serve as our system of record for placements, CRED events, historical runs and recon tables. We’ll use Jenkins jobs, or an equivalent continuous integration orchestrator, for scheduling automated ingestion flows. We’ll also incorporate improved logging, error-flagging, duplicate detection and pre-ingestion reconciliation as standard practices across all new creditors.
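The duplicate detection and pre-ingestion reconciliation practices listed above amount to checking each incoming row against historical placements before applying it. The sketch below uses a hypothetical composite key of (creditor, invoice ID, amount) and plain dictionaries to stay self-contained; the team's real implementation sits on Pandas and Snowflake.

```python
# Pre-ingestion reconciliation sketch: split an incoming file into rows
# that are safe to ingest and rows whose composite key already exists in
# historical placements, so credits are never applied twice.

def reconcile(new_rows, historical_rows):
    """Return (safe_to_ingest, flagged_duplicates)."""
    seen = {(r["creditor"], r["invoice_id"], r["amount"])
            for r in historical_rows}
    safe, duplicates = [], []
    for row in new_rows:
        key = (row["creditor"], row["invoice_id"], row["amount"])
        (duplicates if key in seen else safe).append(row)
    return safe, duplicates

history = [{"creditor": "acme", "invoice_id": "A1", "amount": 10.0}]
incoming = [
    {"creditor": "acme", "invoice_id": "A1", "amount": 10.0},  # duplicate
    {"creditor": "acme", "invoice_id": "A2", "amount": 5.0},
]
safe, dupes = reconcile(incoming, history)
```

Flagging duplicates rather than silently dropping them is the important design choice: operations can review the flagged rows, which is the manual check the pipeline gradually automates away.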

 

How does this project tie into larger company goals?

This project supports broader company goals by reducing operational overhead, improving data trust and accelerating new client activations. Starting from a simple goal — make file ingestion smoother — it has grown into something that directly impacts efficiency and client satisfaction.

As the pipeline becomes standardized, onboarding new creditors becomes faster and less risky. Cleaner data leads to more accurate treatment strategies, fewer consumer issues and more predictable recoveries. Automating checks such as duplicate-invoice detection, double-credit prevention and placement validation reduces the likelihood of balance disputes and ensures consistent processing across portfolios.

From a technical alignment standpoint, the framework helps scale our ingestion capabilities without proportionally increasing engineering lift. It turns custom creditor logic into a productized, reusable asset that enhances reliability, reduces turnaround time and directly supports revenue growth by enabling us to take on more portfolios with higher confidence.

 

 

Stephen Flynn
Lead Back-End Engineer • ChowNow

ChowNow’s platform enables independent restaurants to offer commission-free ordering, build branded mobile applications, simplify their takeout operations and create marketing campaigns. 

 

Describe a project you’re especially eager to tackle in the new year.

In the first half of 2024, ChowNow purchased Cuboh Software. Over the last year and a half, we’ve worked with our colleagues at Cuboh to harmonize our infrastructure. Now that we’ve tightly integrated Cuboh’s engineering systems with ChowNow’s, we want to continue building on that integration, and take full advantage of it, to provide even more value to our restaurants. 

My teammates and I are looking forward to working with our product and domain experts to improve the user experience and simplify the process for all involved. We will collaborate with our colleagues across the company to revitalize our onboarding and billing systems to make it easier for our restaurant partners to access the complementary features and functionality found across our product offerings. Not only will we look to our internal resources, but this is our opportunity to leverage our third-party software providers. I’m looking forward to the challenge of meeting the business requirements for billing and payments while simplifying the path a restaurant takes from initial inquiry to its first online orders.

 

“I’m looking forward to the challenge of meeting the business requirements for billing and payments while simplifying the path a restaurant takes from initial inquiry to its first online orders.”

 

What technologies and/or practices is your team leveraging to tackle this project?

We’ve recently reorganized the engineering organization into specific domains. The aim of this reorganization is to equip our individual development teams with the resources necessary to see a large-scale project from conception to completion. Each domain is a micro-company within ChowNow. Working with the product organization, we will have input into how we tackle requirements and be able to take ownership of meeting company goals.

As an engineering organization, we’ve invested heavily in AWS, so we will be taking advantage of what it has to offer. Onboarding and billing influence every aspect of a client’s experience, with data from both making its way through multiple services and software platforms. To keep these systems independent of, albeit communicating with, each other, we plan to take advantage of the event-based solutions AWS provides.
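The decoupling described above can be pictured with a minimal in-process publish/subscribe bus. This is only a sketch: the event name, the subscriber services and the `EventBus` class are all hypothetical stand-ins for the managed AWS services (such as EventBridge or SNS/SQS) that would play this role in production.

```python
from collections import defaultdict
from typing import Callable

class EventBus:
    """Minimal in-process stand-in for a managed event bus (e.g. AWS EventBridge)."""

    def __init__(self) -> None:
        self._subscribers: dict[str, list[Callable[[dict], None]]] = defaultdict(list)

    def subscribe(self, event_type: str, handler: Callable[[dict], None]) -> None:
        self._subscribers[event_type].append(handler)

    def publish(self, event_type: str, payload: dict) -> None:
        # Each subscriber reacts independently; the publisher knows nothing about them.
        for handler in self._subscribers[event_type]:
            handler(payload)

bus = EventBus()
provisioned = []

# Hypothetical downstream services: billing and menu setup both react to the
# same onboarding event without being called directly by the onboarding flow.
bus.subscribe("restaurant.onboarded", lambda e: provisioned.append(("billing", e["id"])))
bus.subscribe("restaurant.onboarded", lambda e: provisioned.append(("menu", e["id"])))

bus.publish("restaurant.onboarded", {"id": "r-123"})
```

The point of the pattern is that onboarding can gain or lose downstream consumers without any change to the publishing code, which is what keeps the systems "independent of, albeit communicating with, each other."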

Additionally, AI is opening up new avenues for us to automate and simplify onboarding new restaurants. Taking advantage of ML to process restaurant data will reduce the time restaurant owners will have to spend getting online with ChowNow.

 

How does this project tie into larger company goals?

Helping independent restaurants thrive is ChowNow’s long-held North Star. The restaurant industry is very competitive, and the less time restaurant owners have to spend away from the business of satisfying customers, the better. Simplifying our onboarding and billing process will allow restaurants to get online faster and start taking orders sooner. We love to hear from our restaurant partners, but the less time a restaurant owner has to spend working with us, the more time they have to focus on delivering great food to eager customers. 

Not only does this project align with our North Star, but it also aligns with our other goals for 2026, including increasing the number of restaurant partners on our platform. Simplifying and accelerating the onboarding process will reduce the number of restaurants that don’t complete the onboarding steps and directly increase our partner count. To achieve that simplification, we plan on leveraging more automation and AI. This is another one of our 2026 goals: identifying where automation and AI can be applied thoughtfully and effectively.

 

 

Jake Helman
Senior Engineering Manager  • PrizePicks

PrizePicks is an independent skill-based fantasy sports operator covering a wide range of sports leagues, from the NFL to the NBA.  

 

Describe a project you’re especially eager to tackle in the new year.

Enabling the future of prediction markets is the project that keeps me up at night in the best way. This represents a completely new business line for PrizePicks, and my team, Pulse (Live Scoring and Data Platform), is the foundation that makes it possible. We need to unify data models across daily fantasy sports and prediction markets, build real-time stat calculation engines and support entirely new market types. The stakes are high, the timeline is aggressive, and we’re building the infrastructure that could define PrizePicks’ next chapter.

 

“Enabling the future of prediction markets is the project that keeps me up at night in the best way.”

 

What technologies and/or practices is your team leveraging to tackle this project?

We’re building a comprehensive platform with standardized APIs, software development kits and self-service tooling that dramatically reduces integration time. We’re establishing API versioning standards to achieve zero breaking changes, which is crucial when you’re becoming the single source of truth. We’re also implementing comprehensive onboarding documentation, architecture guides and migration runbooks. The goal is to make Pulse so developer-friendly that product teams actively want to use it rather than build their own solutions.

 

How does this project tie into larger company goals?

PrizePicks’ strategy is to offer a best-in-class unified experience across daily fantasy sports and prediction markets that competitors can’t match. That requires a robust, unified data foundation, which is exactly what we’re building. When product teams can launch features in weeks instead of months, when customers experience perfect data consistency across all touchpoints, and when we can support niche sports and cross-sport experiences that differentiate us in the market, that’s how we build a competitive moat. The platform becomes strategic infrastructure that’s hard to replicate.

 

 

Chris Nusbaum
Senior Manager of Product Engineering • KUBRA

KUBRA develops customer experience management solutions for utility, insurance and government entities. 

 

Describe a project you’re especially eager to tackle in the new year.

On our messaging team, we are excited to architect and deliver a new service responsible for sending timely payment reminders to end users with upcoming bills. This platform will process and deliver millions of notifications per day across multiple channels, which requires a highly resilient and scalable system design.

 

“On our messaging team, we are excited to architect and deliver a new service responsible for sending timely payment reminders to end users with upcoming bills.”

 

We’re implementing a multi-stage validation pipeline that performs over a dozen checks on consumer data, billing metadata and delivery eligibility. Each step must execute with low latency and high accuracy to ensure we’re generating the right reminder at the right time. We’re also building robust safeguards to eliminate single points of failure, including distributed message queues, retry logic, redundancy across services and proactive monitoring to detect anomalies before they impact users.
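A staged validation pipeline of this kind can be sketched as an ordered list of checks that each reminder must pass before it is eligible to send. The check names and fields below are illustrative assumptions, not KUBRA's actual rules; the real pipeline runs over a dozen such stages.

```python
from dataclasses import dataclass, field

@dataclass
class Reminder:
    account_id: str
    amount_due: float
    opted_in: bool
    channel: str
    failures: list = field(default_factory=list)

# Illustrative checks only; a production pipeline would cover consumer data,
# billing metadata and delivery eligibility with many more stages.
def has_balance(r: Reminder) -> bool:
    return r.amount_due > 0

def consented(r: Reminder) -> bool:
    return r.opted_in

def channel_supported(r: Reminder) -> bool:
    return r.channel in {"sms", "email", "push"}

PIPELINE = [
    ("has_balance", has_balance),
    ("consented", consented),
    ("channel_supported", channel_supported),
]

def validate(reminder: Reminder) -> bool:
    """Run every stage in order, recording failures rather than raising,
    so the pipeline can report exactly why a reminder was held back."""
    for name, check in PIPELINE:
        if not check(reminder):
            reminder.failures.append(name)
    return not reminder.failures

ok = validate(Reminder("a1", 42.50, True, "sms"))
bad = Reminder("a2", 0.0, False, "fax")
validate(bad)
```

Recording every failed stage, rather than stopping at the first, is one way to keep the per-reminder decision auditable while still executing each check with low latency.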

This is the kind of engineering challenge that excites me: high-volume event processing, fault-tolerant architecture and system-level thinking. The work directly supports our broader goals around customer engagement and reliability, and it allows us to meaningfully improve how millions of people interact with their bills and payments every day.

 

What technologies and/or practices is your team leveraging to tackle this project?

For this service, we’re integrating Temporal.io to significantly increase reliability and consistency across our workflow orchestration. Given the volume of end-user accounts and billing data we need to process, our system relies on a network of microservices that independently manage payments, account details and delivery logic. While this architecture enables scalability, it also introduces numerous potential failure points.

Temporal allows us to solve this in a clean and robust way. We can create durable workflows that maintain state for each notification, pause them for hours or even weeks and reliably resume exactly where they left off. This ensures that any platform outages, microservice failures or transient network issues during the dormant window do not impact the final delivery of a payment reminder.

Another key factor in choosing Temporal is its built-in support for retries, backoff strategies and failure recovery. Instead of engineering our own orchestration and reliability layer, Temporal provides these capabilities out of the box, dramatically reducing development time during a critical phase of the project. In short, Temporal gives us a fault-tolerant foundation for high-volume event processing, which aligns perfectly with the reliability requirements of this system.
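As a rough illustration of the retry-and-backoff behavior described above, a hand-rolled version might look like the sketch below. In practice Temporal expresses this declaratively through a retry policy on the workflow or activity; this hypothetical helper only shows the pattern such a policy replaces, and the function and service names are invented for the example.

```python
import time

def with_retries(activity, max_attempts=5, initial_interval=1.0,
                 backoff=2.0, sleep=time.sleep):
    """Retry a flaky activity with exponential backoff between attempts."""
    interval = initial_interval
    for attempt in range(1, max_attempts + 1):
        try:
            return activity()
        except Exception:
            if attempt == max_attempts:
                raise  # out of attempts: surface the failure
            sleep(interval)      # wait before the next attempt
            interval *= backoff  # 1s, 2s, 4s, ...

# Simulate a delivery call that hits transient failures twice, then succeeds.
calls = {"n": 0}
def flaky_send():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient network issue")
    return "delivered"

waits = []  # capture the backoff intervals instead of actually sleeping
result = with_retries(flaky_send, sleep=waits.append)
```

Getting this logic, plus durable state across pauses and outages, out of the box rather than maintaining a custom layer like the one above is the development-time saving described here.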

 

How does this project tie into larger company goals?

By designing this service with strong reliability and state management in mind, we can ensure that every notification is delivered on time and with the correct information, no matter what occurs in the system between receiving bill data and the scheduled send. That level of consistency is essential. Even a single missed or inaccurate reminder can create confusion or frustration for an end user.

Providing a high-quality experience is a core priority for us at KUBRA. We want every interaction to reinforce trust and make it easier for people to stay informed and in control of their payments. This project is an important step toward strengthening that commitment and delivering dependable, user-focused solutions at scale.

 

 

Jacob Shafer
Head of AI Innovation  • Adswerve, Inc.

Adswerve, Inc. is a data, media and tech consultancy that aims to help organizations unlock digital marketing opportunities.

 

Describe a project you’re especially eager to tackle in the new year.

We are launching Support+, an ambitious internal initiative designed to deliver a massive upgrade to our already renowned client support experience through applied machine learning.

Our support team challenged us to leverage our immense historical knowledge base — decades of client conversations and resolution strategies — and make that expertise available 24/7. To achieve this service upgrade, our development team is building and training a sophisticated suite of AI agents designed to provide instant, real-time responses to common queries.

This is more than just a tool; it’s about augmenting the reach and efficiency of our human experts. The agents will serve as a constant, immediate resource for both clients and internal teams, ensuring our institutional expertise is always instantly available. This enhancement allows our human support team to operate with even greater confidence and focus, cementing our commitment to delivering the same personal, warm and elevated guidance our clients rely on. It’s an evolution, not a replacement.

 

“The agents will serve as a constant, immediate resource for both clients and internal teams, ensuring our institutional expertise is always instantly available.”

The Tech Shafer’s Team Relies On to Build Support+

“The technical stack is centered on advanced AI and modern serverless architecture, designed to optimize developer velocity and scalability.

  • AI and Agent Development: 
    • We utilize Google’s Gemini LLMs, maintaining flexibility to integrate other models as needed. 
    • Agent construction relies on a powerful Agent Development Kit to build experts capable of complex, long-running reasoning tasks. This architecture allows AI agents to engage in contextual dialogues via the Model Context Protocol and directly interact with necessary systems using the OpenAPI standard, simplifying integration.
  • Infrastructure and DevOps: 
    • Our environment is anchored in Google Cloud Platform. 
    • We leverage Google Cloud Run for hosting, which drastically reduces DevOps overhead by managing container orchestration, networking and load balancing, allowing engineers to focus on code. 
    • Security is managed through Google Secrets Manager for user grants and Firebase Auth for robust authorization flows. 
    • Our continuous integration/continuous delivery pipeline ensures rapid, secure deployment via the seamless integration of Bitbucket and Google’s Workload Identity Federation.”

 

How does this project tie into larger company goals?

Support+ is directly aligned with our three strategic pillars, acting as the digital engine for our growth and client value.

Our expert status: We pride ourselves on being a solutions-driven partner. By training the AI on our expert knowledge, we’re guaranteeing our top-tier expertise is consistent and instantly available 24/7. It’s the ultimate client commitment.

Expanding our tech: We’re diving into complex new platforms. Support+ gives us the scalable support backbone we need to launch these sophisticated capabilities smoothly. If we expand our capabilities, our support needs to expand, too!

Building the future: This is a key investment in our AI-enabled services vision. We’re actively building the future of client service, which adds big value to our offerings and confirms we are a company that bets big on innovative tech.

 

 

Scott Laurence
Director of Business Intelligence • First Entertainment Credit Union

First Entertainment Credit Union offers financial services to individuals and organizations in the entertainment industry. 

 

Describe a project you’re especially eager to tackle in the new year.

We’re excited to build our data analytics platform (DAP), a modern data warehouse designed to transform how we use data. This initiative will centralize disparate sources into a single, governed environment, enabling near real-time insights for leadership and frontline teams. Beyond technology, DAP represents a cultural shift, empowering every decision-maker with trusted data. It’s not just infrastructure; it’s the foundation for a data-driven future where strategy and operations align seamlessly.

 

“This initiative will centralize disparate sources into a single, governed environment, enabling near real-time insights for leadership and frontline teams.”

 

What technologies and/or practices is your team leveraging to tackle this project?

Our approach combines proven technologies with agile practices. We’re leveraging SQL Server for robust data management and Python for automation and advanced analytics. Governance is embedded through standardized schemas and role-based access. We also prioritize stakeholder engagement and objectives and key results (OKRs) to ensure alignment with business goals. This isn’t just about tools; it’s about creating scalable, transparent processes that make data accessible and actionable across the organization.

 

How does this project tie into larger company goals?

DAP directly supports our mission to foster a data-driven decision-making culture. By democratizing access to accurate, timely data, we enable leaders and operators to act with confidence. This initiative operationalizes OKRs, linking strategic objectives to measurable outcomes. Ultimately, DAP is more than a technical solution; it’s a catalyst for cultural transformation, ensuring that every decision reflects our commitment to member value and organizational excellence.

 

 

GT Tskhondia
Senior Engineer  • Healthee

Employees can use Healthee’s AI-powered platform to access, manage and navigate health benefits, whether that involves setting up a telehealth appointment, finding the most cost-effective care or more.

 

Describe a project you’re especially eager to tackle in the new year.

In the new year, we’re excited to launch the next evolution of Zoe, our intelligent assistant that helps users navigate healthcare benefits with ease. Zoe has already become a core part of the Healthee experience; now we’re making her faster, smarter and more intuitive. The redesigned Zoe will understand complex requests, retrieve accurate information instantly and communicate in a natural, conversational way. This project focuses on transforming Zoe into a seamless, personalized guide that connects users to everything Healthee offers, turning complex healthcare questions into clear, simple answers.

 

“In the new year, we’re excited to launch the next evolution of Zoe, our intelligent assistant that helps users navigate healthcare benefits with ease.”

 

What technologies and/or practices is your team leveraging to tackle this project?

The Zoe redesign leverages advanced AI orchestration, LLMs and context-aware data systems to deliver fast, accurate and human-like responses. We’re enhancing how Zoe understands and processes user intent, allowing her to combine conversational fluency with real-time data access. Our team is implementing modular AI tools that optimize context, improve precision and enable scalable growth as Zoe learns. This architecture ensures that every interaction feels effortless while maintaining the accuracy and reliability users expect from Healthee.

 

How does this project tie into larger company goals?

Zoe’s redesign is central to Healthee’s mission of making healthcare simple, transparent and empowering. By enhancing Zoe’s intelligence and usability, we’re giving employees and HR teams a smarter, more intuitive way to access health information. This upgrade strengthens our goal of empowering better healthcare decisions while reducing administrative complexity. Zoe’s evolution represents more than a product improvement — it’s a key step toward creating a fully connected Healthee ecosystem, where technology works seamlessly to make health benefits clearer, faster and more human.

 

 

Responses have been edited for length and clarity. Images provided by Shutterstock and listed companies.