Em Daigle:
Alright. Welcome, everyone. We are going to give everybody just a couple of minutes to get started. Um, but as we're kinda waiting for people to filter in here, would be great to have anyone on the line um, drop into the chat here where you're calling in from. Uh, always like to see where people are in the world. I'm calling in from Boston today, um, but I always joke, have laptop will travel. Never know where I'm coming from. Alright. We got oh, been Martha's Vineyard, I like. DFW. Isaac, I was just there. I'm actually heading back there this weekend. Awesome. Alright. I like it. I like it. Columbus, Ohio. And Luba from London. Luba used to be on my team. I'm very excited to see her here. Alright. Awesome. We got people from, my gosh, all literally all over the world, which I love. Um, okay. We'll give it just one more minute here before we dive in. We've got some quick housekeeping to kinda go through, which should just take a moment, um, just to familiarize yourselves with, um, the SQL platform here. Um, there are some sections over on the right hand side of your screen, um, where you can access q and a. We do have some time at the end of the call where we will, um, save a few minutes to address any of those, um, as they come in throughout the session. That being said, please keep them coming because, um, we do have some folks moderating it as well. And if something does pop in and is particularly, um, relevant to the conversation at that point in time, I'll certainly work it in. So, um, please keep the questions coming. Um, we do have an hour here together. However, we do have an additional thirty minutes if the conversation's going well to, um, continue the conversation in a breakout room as well. So thank you so much, everybody, for joining us today. Here, I'm with Isaac Heller and Jason Berwanger. Super excited to have you both. For those of you who don't know me, I'm Emma Daigle. I am the chief, uh, automation officer at Automate. I founded the community to provide a space for accounting and finance professionals to explore AI and automation. So, uh, this is a particularly relevant topic for me and super excited to be here to help moderate today. Um, Jason, Isaac, would love if you could each, uh, give a brief intro as well.
Jason Berwanger:
Certainly. Isaac, lead us off.
Isaac Heller:
Yeah. Definitely. Well, first of all, it's good to see you, Em and Jason. And I'm excited for everyone who's listening today because, um, unlike other webinars you may hear or have been on, you know, Em and Jason have lived and breathed this space, like me. You know, we've all been in in the shoes of a finance professional and led a lot of these projects. So this is gold. Um, for me, uh, you know, I I spent some time in those projects and then ended up, uh, falling in love with the early AI models and thinking about how I could apply it to my experience in accounting and finance. I went on to found Trillion, and Trillion's an AI powered accounting workflow platform. And I'm sure we'll talk more about that today, but I'm excited to be here. Thanks for having me.
Em Daigle:
Awesome. Awesome. And, Mr. Berwanger?
Jason Berwanger:
Thank you. Well, as a compliment to you, I'm excited you're here. You're one of the few folks that I know actually comes with both that real world practitioner experience and serves customers and adds value using AI from the accounting lens. So I, uh, was super stoked to have you share those realistic data points with the audience, and, uh, glad you're here.
Isaac Heller:
Thank you.
Jason Berwanger:
Perfect. Uh, yes. A quick intro. My background, uh, little bit of hybrid between corporate accounting. Uh, I've held pretty much any role from revenue accountant all the way up to, uh, head of finance leading a number of, like, fintech and insurtech startups. Yeah. I, uh, got a chance to do a lot of technical work in data engineering and mostly automation serving accounting and finance. Uh, I've done seven ERP implementations, uh, disproportionately NetSuite, uh, and did an IPO in a little less than ninety days back in 2020 with a company called Root Insurance. Uh, the CFO asked me, hey. We have NetSuite, so we're ready for the IPO. Right? And we had, uh, Braintree and Stripe transactions and, uh, data in, like, eight systems, and none of them were NetSuite, and the accounting team and I were like, yeah. We have a lot of work to do to pipe this in. And, yeah, that was the impetus, the experience to start Hubify and to really solve some of those data automation problems, specifically for accountants.
Em Daigle:
Love it. Well, um, I think you both have an extreme amount of experience firsthand from a practitioner perspective, and both of you, uh, have taken that and then doubled down to actually found and create solutions for those in the space. So, um, super excited. This should be great conversation today. I feel lucky that I get to be here asking the questions. So, um, without further ado, let's kick it off. Um, let's start with the difference between deterministic thinking and probabilistic outcomes. I think, you know, we'll we'll get to where AI fits in there and and how automation fits in, but I think just, um, maybe, Jason, you can just kick it off. I think some folks don't necessarily know right off the bat what we mean when we talk about just deterministic thinking or probabilistic outcomes. So, um, would love if you could give it kind of, like, a first pass as far as, like, one, what are the differences. Right? And then how AI fits in and and automation fits in there as well.
Jason Berwanger:
Yeah. Sure. Happy to. So, uh, the way I always think of this, particularly from a finance perspective on the team between really accounting and FP and A, and I'll oversimplify. Obviously, finance teams are much more diverse than that with treasury and all kinds of internal audit. But when we think about that, uh, we're really talking about two different types of problems. There's problems that you need to solve that have complete provenance and lineage back to a source of truth, and you could prove exactly why you knew what you were thinking because there's a provable element of truth. That's the deterministic, and frankly what a ton of, uh, accounting needs to get: immutable, never changing point in time data that you can prove was correct at a point in time. And it would, uh, it would uphold the standards of an audit and our accounting compliance standards of, you know, both IFRS and GAAP. Um, and that's really what I think about with determinism. I think of it as: I could find the truth that's out there. You know, it may not be easy to get, but I could find it if I had to. Probabilistic goes into more of the FP and A and more of the analytics space where it's, uh, I can create a range and create a forecast that within a certain range of probability of five, ten percent, I'm gonna be 90% correct. If I wanna get to 1%, I may be 60% correct because I really don't know, but I'm making a prediction of the future. Um, and then the last bit when I think about that is also just, like, stochastic outputs. You see this a lot with, like, hallucinations with AI, and there's a random output where, you know, in some sense, you're dealing with a chatbot, and you're gonna get a random answer as if it's a person who's gonna give you a different answer based on the same trained data. Um, and so probabilistic and stochastic, in some sense, kinda go hand in hand in how they perform when solving problems. Um, but most of the time when we think about those things, we're really thinking about, well, what's the problem we're trying to solve? And then we back into, well, what's the right solution? Do I need something that helps me get a deterministic outcome? And is that even possible with this problem, or is a probabilistic outcome the right one? Um, and so from my seat, when we think about customers that we serve at Hubify and my experience in, like, uh, PLG business models where you have contracts in your CRM, like a Salesforce or HubSpot or maybe even Stripe, you have a billing system, and you're in billing and dunning and, you know, credit notes and all the events, and then you have a payment system. There is a deterministic relationship between them. And the right way to solve that in many ways is to say, well, great. I'm gonna connect to those source systems. There's an interrelationship between them in terms of how contracts are gonna relate to invoices, invoices to payments, payments to disputes. And then at the end of the day, if you automate it in that way with those relationships in mind through that accounting context, you get to a deterministic outcome where the reconciliations and anything that falls out is totally deterministic, as well as the output. You can validate exactly why you counted something at a point in time. That's a great example of where, you know, AI gets a lot of the marketing share and, you know, uh, a lot of the excitement today on LinkedIn.
But at the end of the day, I think when you think about the future and what tools help folks, it really, you know, goes from old school workflow software. We have a human being doing the calculations and the hard work and then, you know, putting a journal entry into NetSuite. Well, that future really looks like, you know, hey. You can deterministically solve that in a subledger using, uh, the context of an organization's accounting policies built into that automation. Well, then you can automate your close and get to a daily close output. Conversely, though, there are plenty of problems where we think about AI and some of those more probabilistic outcomes where, uh, you may think about FP and A and the range of predictions is a very common one. Uh, but, you know, I know, Isaac, you and I have talked a lot about this at length around when we think about the depth of variability across contracts, uh, being able to say, hey. There are a number of data points and elements around each contract, performance obligation, the term, you know, the termination, uh, availability of a contract, being able to grab that and really take what OCR tried to do and promised us fifteen, twenty years ago, and then layer on this more, uh, you know, uh, OCR two point o as I joke. But Isaac was like, yeah. Only half the people are gonna get that, but I'm gonna use it again.
Jason Berwanger:
I feel like that's that's where, uh, ultimately, this goes is you you can layer in the first layer of scanning and then have the agentic capabilities to then go rescan and revalidate a form with the double checks that a human being can perform. Ergo, you get out what are the core requirements to then deterministically solve what should be the six zero six requirements. Um, So a lot of these, you know, end up playing together. But, um, I think those are probably two of the the best use cases I can think of in terms of where deterministic solutions apply to the problem and where more of the probabilistic solutions apply and and add value to accounting and finance teams.
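To make the deterministic side of what Jason describes concrete, here is a minimal sketch of matching payments to invoices where everything that does not match falls out as an exception for review. The record fields, the one-cent tolerance, and the sample data are assumptions for illustration, not any particular system's schema.

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class Invoice:
    invoice_id: str
    customer_id: str
    amount: float

@dataclass
class Payment:
    payment_id: str
    customer_id: str
    amount: float
    invoice_ref: Optional[str]  # reference the payment processor passed through, if any

def reconcile(invoices: List[Invoice], payments: List[Payment]) -> Tuple[list, list]:
    """Deterministically match payments to invoices; anything that falls out is an exception."""
    by_id = {inv.invoice_id: inv for inv in invoices}
    matched, exceptions = [], []
    for pay in payments:
        inv = by_id.get(pay.invoice_ref) if pay.invoice_ref else None
        # A match requires the reference, the customer, and the amount to all agree,
        # so every matched pair has provable lineage back to its source records.
        if inv and inv.customer_id == pay.customer_id and abs(inv.amount - pay.amount) < 0.01:
            matched.append((pay.payment_id, inv.invoice_id))
        else:
            exceptions.append(pay.payment_id)  # deterministic fallout for human review
    return matched, exceptions

# Illustrative usage with made-up records:
invs = [Invoice("INV-1", "CUST-9", 1200.00)]
pays = [Payment("PAY-7", "CUST-9", 1200.00, "INV-1"), Payment("PAY-8", "CUST-9", 50.00, None)]
print(reconcile(invs, pays))  # ([('PAY-7', 'INV-1')], ['PAY-8'])
```

The point of the sketch is that every output can be traced back to its inputs, which is the auditability property the panel contrasts with probabilistic answers.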
Em Daigle:
Yeah. Definitely. Isaac, anything to add to that or any others that, like, jump out at you from your perspective and maybe with some of the customers you're working with?
Isaac Heller:
Um, yeah. To some extent. I mean, like, I think Jason makes a really good point that you really have to, in most cases, decide whether it's a deterministic or a probabilistic kind of process or outcome you're trying to achieve. But there are cases where it's going to be a mixture of both. Right? And you're even seeing systems coming together and adding both. So, you know, like, in the contract example, let's take a pile of contracts, revenue contracts, and they may have nonstandard terms, right, or leases, and they may have some sort of fuzzy terms. You might ask questions on that data using AI, and those are probabilistic questions. Like, are there nonstandard terms in these contracts? And it's thinking, and it's like, probably. Right? And I can tell you exactly what it says, but it's like, probably. Right? And then you ask, well, what types of terms? And it says, it's probably, you know, cancellation clauses. Right? And then you keep going deeper with those probabilistic questions, and then you hit a point where you're like, okay. Yeah. I'm a revenue accountant. I'm a lease accountant. I'm a technical accountant. I need to see this. And then you just zoom in, and you have your deterministic kinda ground truth view of things. And if you think about that whole process, you have to get good at understanding the difference between probabilistic and deterministic questions. Because if you go too far one direction or another, you're gonna overlook something. And that's ultimately what Jason and, you know, companies like ours are trying to avoid is overcommitting to one side and then ending up being less accurate. You know, the goal here is obviously to be more accurate, more efficient. But if we move too fast or we don't move in the right direction, we end up, um, really embarrassing ourselves a little bit based on the personalities of people I've worked with. Yeah.
Em Daigle:
Yeah. I think that's a fair point. And I think, you know, all of the automation piece has been there and evolving. AI more recently. But there is sort of this underlying risk that has sort of always been there, but it's probably even, um, more amplified by AI and what we can do. So, like, the whole garbage in, garbage out. Right? Bad data in, bad data out is amplified by AI, especially because whether you're putting data through something deterministic or probabilistic, um, at the end of the day, we still need to make sure that what we are, um, using for data is complete, accurate, and has all of those controls around it. So, you know, we always talk about what AI can fix or what good use cases would look like, but I'm particularly interested in maybe diving into a little bit around what AI can't fix, right, like, around accounting. And so curious, Isaac, um, what are you kind of seeing as areas that maybe we need to make sure have a few more guardrails around or, um, make sure we have all those proper controls in place so that we still have human in the loop? We still can provide confidence in the numbers that we're reporting at the end of the day.
Isaac Heller:
Yeah. Well, I think it's telling that we're asking questions like what can't AI fix because you're right. Like, it's out there in the air, and it's all over our feeds and our news. And it's like, we assume that it's this magic wand that can fix everything. Yeah. I think most of us know somewhere deep in our core that the same problems we had yesterday are the same problems we have today and tomorrow, which is, you know, clean and accurate data. Right? And so before we get into AI, like, the way Jason and I have talked about in a recent conversation is like that food pyramid view where at the bottom is the core data, the record keeping, the transactions, and you just start to kinda go up. Maybe in the middle, there's some, um, translated data or workflows or automation, maybe, like, some rev rec data. And then at the top is insights. Right? And I think the promise of AI in finance and really in the industry is unlocking insights. Meaning, you can start to play and noodle on questions that you couldn't ask before. Like, I'll give you a funky example. But, you know, if you wanna do revenue forecasting, a lot of people look at that as, like, an FP and A project. But what if you wanted to, like, forecast based on six zero six? Right? Like, you actually wanted to ask what the six zero six impact would be, because everyone forgets that. Like, people do payroll forecasts. They do, like, finance-based forecasts. But what if you actually wanted to do that impact analysis? Well, you would need the whole hierarchy of clean data to be able to get that level of insights. Um, and so just kinda working backwards from, again, AI, the holy grail is unlimited insights. Any question I've got, at my fingertips. Um, going back down, the data needs to be right from the bottom. So, I mean, Jason and the team at Hubify are obviously experts in the area of revenue and revenue data. There are multiple layers of, let's just call it, cleanup and translation that need to be done just to make sure that you can query that data from the top in an AI model. Um, and I'm sure we'll get into some more examples. But I think just to kind of level set, the same problems we had yesterday are the same problems we have today, which is garbage in, garbage out. It has to start at a foundational data level, and then you can get to talking about what the fun AI use cases are.
Em Daigle:
That's a great point. Very great point. Um, Jason, from your perspective, I know the data aspect is is huge. Um, when you are thinking about the automation aspect of it and compliance, like, what are you what kind of advice are you, uh, giving to your prospects and customers as they're thinking about, like, what they need to do to get their data ready to be able to leverage something like Hubify and be able to unlock everything with the automation and then the AI aspect of it?
Jason Berwanger:
Yeah. Yeah. The most common question, you know, we end up fielding becomes almost like why hasn't there been more adoption related to AI and automation in both corporate accounting and FP and A finance? And I think the answer is there is a core dependency here from a data pipeline perspective. Uh, at the end of the day, these are LLMs, and they need to be trained on data. And more often than not, if I'm talking to a customer who's like, hey. I have a CEO mandate to use AI in my close to get it done super fast. Right now, their close consists of their accounting team gathering a bunch of disparate data, translating that into atomic, high level translated data that is now compliant because an accountant has actually translated this data into immutable journal entries. And then once every forty days, call it, with a, you know, average ten day close, they're putting high level information into your ledger. Exactly where is, uh, someone going to be able to train an LLM to be able to get enough contextual information to be able to get the answer that actually adds value? And the answer is if your FP and A analyst or your accounting team can't do it, it's highly unlikely that the prerequisites are met that an AI or an LLM could do it, um, which is a long way of saying that food pyramid is a real set of dependencies and building blocks, uh, and it's very foundational to data engineering and the same kind of paradigms we saw with, uh, analytics and data science when we came to, hey. We really wanna predict these things. But if we wanna predict these things, I have to have data that's, uh, highly atomic and completely correlated and contextualized. And so, like, when someone says clean data, well, it depends on what you mean by clean data. Like, if it's just, like, a totally normalized data set with no context, well, then even if you have somebody that's a triple PhD or an AI that is the equivalent of a triple PhD, if they don't know the context, they're gonna ask a lot of follow-up questions or not be able to answer your question without more info. I think that's also true related to this concept of, like, AI to automation, and there is, I think, a very clear dependency there. Um, the exciting part for us, though, is, you know, we're trying to develop the product in a way where we wanna support the accounting team to automate the deterministic core solution so that you can use that data and actually egress that safely into LLMs and train them to then get the analytical benefit. Um, and so we're really just, you know, trying to help build the foundation so that way those outcomes are actually enabled now that the dataset and, I will say, the harder but different deterministic problem has been solved.
Em Daigle:
Yeah. And thank you for pointing that out. I think oftentimes when we think about data, data cleanliness, people, and I know myself included, sometimes just go to, okay, is it complete and accurate? But to your point, like, is it structured? And if it's structured and it's already in fields and it has a value in that field, how then does it get the right level of context to be able to understand what is needed and what should be in that cell or in that field. And so I think a lot of the data consideration goes far beyond just does each field have a value in it? But what should the value be? What should it look like? Where does it come from? Does it fall within certain parameters that it needs to? Like, I think that whole data readiness theme gets distilled down so easily to just, like, a few words when really it's a very large systemic problem, um, which is why I think, like, as I go into, like, our next question here, I'm probably gonna lead the witnesses here a little bit, but there's this idea that accountants will no longer be needed because of the advent of AI. I think we all know that's not actually true, at least for, let's call it, a hundred years. That being said, who knows what's gonna happen in a hundred years? That being said, for now, though, there is such a huge need for accountants, and I think I want each of your perspectives on this. But it's even things like we just spoke about where the human in the loop does need to be there. The expertise that accountants have, I think, can provide so much in the way of the context and or tweaking of AI to be able to learn in the right way. So curious from your perspective, though, and I don't wanna put words in either of your mouths, but I just got off another conversation on a topic so close to this. What's your thought on where AI and automation can maybe shift or evolve the accounting community as opposed to displacing it completely? Maybe, Jason, if you wanna jump in.
Jason Berwanger:
Yeah. Sure. Yeah. I think when we think about AI replacing accounting, that's the wrong way to think about it and not an outcome that I think is highly realistic. And part of the way I like to think about this is when you think about the building blocks of accounting in the process, you're really trying to understand an organization. You're designing policies around that organization to then build, uh, highly compliant and complex reporting and outputs, which is really interpreting vast sums of operational transactions with the context of both the organization and the accounting. And then, great. I'm now gonna output what is clean financial reporting we can use for external and ideally internal if folks are smart enough to do it. Um, so to me, the shift there is I think it actually makes the accounting profession and people more important because it's very much a governance exercise. And Mhmm. That governance exercise should be led by those experts. And there's nobody better in an organization to know how to translate operational performance into financial data than the accounting team. Now if you roll, you know, five years forward and you look and say, from the finance work, what's gonna get replaced? Um, you know, I do think there's realistically disruptions when it comes to, like, AP and AR. We've seen the OCR two point o of, like, AP contracts or purchase orders and, uh, invoices from vendors come in, could be scanned and automatically matched. There's a lot of folks who had those roles that I think, you know, there's obviously a different part in the organization, a different value set that they'll add where, you know, that matching could be done, um, to a degree of efficiency and accuracy that it would make sense to do that. Um, you can basically scan the docs, find the PO, and the amount match, and then the exceptions go to one person instead of 10 people running through those docs. Dunning and, like, billing invoicing follow-up or answering questions on, uh, you know, individual, um, invoices or contracts for customers is another great example because that context of the invoice and the agreement can be scanned by an LLM and answered more efficiently and potentially more completely than an individual could. And it gives a better level of customer service and arguably creates value for the end customer and the business. So that feels ripe for disruption. Um, last but not least, the FP and A bit, where, and, you know, I speak from my own seat, a lot of what I did in FP and A is take the accounting close, aggregated data, and reclose, but do it on specific segments to break up the p and l and the balance sheet to figure out what was working and what wasn't. And a ton of that data, if you solve both the underlying data and then you, you know, put an LLM on top of it to find and recognize patterns and outliers and changes and find opportunities, well, now that FP and A person goes from, well, I don't spend my time putting together the analysis. I spend my time looking at the results and the insights that came out of this and actually improving the business. So it really takes that back of house, back office view of FP and A and really elevates them more into the, well, no. That's, like, your best strategic partner who's really your finance business partner to help your different operational groups understand how their actions manifest in financial performance.
And they can do that if they're not trying to put together the analysis for, you know, how the management reporting package went last month that they got twelve days after the close. Um, but I think there is some core disruption to what an FP and A analyst means today and what it'll mean in five years. And I think a lot of those are good things and valuable things for both the end customers, the business, uh, and the associates. So that's my, I think, take on that: accounting becomes more important, but the disruption will happen a lot in the finance operations and FP and A type of space more so than the accounting space. And the context of the organization, the policies, is really the key. Otherwise, to try to use LLMs to solve that, you'd have to expose every stick of data in your organization and everything that is proprietary out to some of these LLMs and then hope that they got the context right in a nondeterministic way. And I don't think folks are going to be willing to do that. It's my, uh, not so short answer.
Em Daigle:
It's a great answer, though. Isaac, I mean, is there anything to add?
Isaac Heller:
No. No. It's an important topic that we discussed. I mean, first, we have to recognize that there is change in the air. Before we get to AI, the actual profession of accounting and the labor dynamics have been changing and continue to change. So CPA enrollment is down. Right? Yeah. Um, you know, I remember there was a Wall Street Journal article that said something like 300,000 accountants left the profession between 2020 and 2022. 300,000. Right? Many of us who are on finance, accounting teams, or work at audit firms are struggling to fill open roles. Right? And, um, I know we all wanna get paid more, but I happen to think accounting and audit jobs are decent paying jobs, right, in America today and really globally.
Em Daigle:
Yeah.
Isaac Heller:
And so we have to admit that there's some just decreased motivation and even sexiness of the profession. Whereas when I started, it was like, oh, you're a doctor, an accountant, a lawyer. Like, these are, you know, the things you go to school and get a degree for. And so we just have to remind ourselves of that. And then as someone who went through, you know, GAAP compliance exercises and contract reading and sampling and everything like that, I can tell you I didn't love it. Right? And I can tell you I loved much more speaking with clients, speaking with different people across the business, um, having an idea and coming up with an ad hoc analysis to execute on that idea. So we know there's this challenge. We all know deep in our hearts that we wanna do the more strategic, exciting stuff, maybe spend a little bit more time with our colleagues than our, you know, dual screens or whatever it is. And that's kind of where things are going. I wanna echo what Jason said around this idea that if accounting, uh, teams and finance teams can embrace AI, they have a leg up in terms of being able to build and even rebuild their organizations in a certain fashion. If you think about AI, I mean, you know, as a CEO of a company now, I look at what AI is being procured across all of our businesses. Right? And marketing's very quick to buy their AI content tools, and sales is doing all their AI call recording and all these tools, uh, customer success, their ticket analysis tools, and all that stuff. Well, me, in my days as a finance person, I'm the only guy who can connect sales data, marketing data, CS data. I can see the full picture. That's my role. Right? And so if I could think of how I can leverage AI, then I can apply that to, really, the whole company. Right? You could start with a string of your go to market, your commercial side. You could do operational stuff with all of that AI and the data at your fingertips. So it's kind of a huge sort of almost like a boomerang or resurgent opportunity, uh, for AI. And I'll even go a little bit further because, you know, I'm reminiscing. I said I was from DFW earlier. Yeah. I was doing my rev rec project at a pre IPO company in DFW. The company was called, uh, Sabre Holdings, which is a travel tech company. It's the, uh, the crappy old green screen system from the airlines and stuff. But the reason I got there is because I started my career as a travel agent. Right? A folder and paper, creating itineraries travel agent, and I was a card carrying IATA member. And this was a time in the early mid two thousands where travel was starting to go online to Expedia and to these other areas. And I've been a travel agent when all these things and all these winds were changing, and people were really worried about how they operated their jobs. And all of my friends and colleagues, older than me, younger than me, who started out as travel agents, some are still travel agents, and they do more specialized types of travel that nobody else can do. Right? And others, and most of them, are rising stars at Kayak and Expedia and Sabre and Amadeus and Google Flights and ITA. And so the travel industry shifted a lot more to a travel technology industry, but people that had a passion and interest and a love for travel became much more technologists. Again, I'm just looking at a ten to fifteen year, uh, trajectory.
You know, the guy I kinda came up with is a VP at Priceline now and is just a a guru at all this stuff. And so I just I I I love that I'm slightly old enough to be able to see an arc of innovation come into a space, and people just follow their passion and their expertise in accounting or travel but become much more technology oriented. And so, um, you know, labor shift, big opportunity, don't be afraid. You know, it happens all the time.
Em Daigle:
That's a great answer. Um, very inspiring. Also reminds me, I'm old enough to remember these things too. So thanks for that. We did get a question that came in that I would love to throw back to you as it's based on kinda like what you just said. So the question is if humans are required to validate data in the AI process or workflow, are they not switching from one type of tedious or manual task that they're doing now to another that would, in the future, be replaced by AI? Just curious, based on, like, how you just kinda gave that example of the evolution in the, um, travel industry, right, and, like, how technology does take and transform the space. Curious your reaction to that right off the bat.
Isaac Heller:
Well, uh, there's kind of a reduction in the volume of time spent related to data entry, and then there's also a change in the energy of the type of data entry you're doing. So, you know, instead of reviewing a hundred contracts, um, over, I don't know, a hundred hours or whatever, you may be reviewing a hundred contracts over ten hours. So you get ninety hours back. But more importantly, the reviewing that you're doing, instead of looking at it and then typing it in, which can become tedious. Right? Fat fingers, you know, everything. The contract, um, and Jason brought up this example earlier. The contract or the invoice is kind of fed to you. You can review it quickly. There might even be visual highlights of the areas to look for. I can tell you in the Trillion interface, we're very oriented towards approval workflows. And so if the AI has, uh, auto populated a field, it has one color. And if it has, um, kind of used probabilistic reasoning to put in that field, it has another color. And then if it's under a certain threshold, even though it may be pretty sure, it's left blank, but it has a box around the area where it may be there. Right? And so if you just think about our nature in work and, you know, hopefully, a lot of us on this call have started out in a more data entry role and then maybe moved up a notch or two. It's such a better feeling in your workday when you have someone coming to you presenting something they found or they're looking at or they're considering, and you as the manager or whatever get to say, yep. I think that's right. Or, no. Actually, you should look there. I think that gives you a lot more energy back. So you got the time savings of looking at the contracts, but also devoting your energy to review can lead to much more accuracy than devoting it to data entry.
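As a rough illustration of the kind of threshold logic Isaac is describing for the review interface, here is a small sketch. The status labels, the cutoffs, and the "exact" versus "inferred" source distinction are made up for the example; they are not Trillion's actual rules.

```python
from typing import Optional

def review_status(field_value: Optional[str], confidence: float, source: str) -> str:
    """Decide how an AI-extracted contract field should be presented to a reviewer.

    `source` is "exact" when the value was pulled verbatim from the document and
    "inferred" when the model reasoned its way to it. Thresholds are illustrative.
    """
    if field_value is None or confidence < 0.60:
        return "flag_region"      # low confidence: draw a box where the value might be
    if source == "exact" and confidence >= 0.90:
        return "auto_populated"   # high confidence, verbatim extraction: one color
    return "inferred"             # probabilistic reasoning filled the field: another color

# Example: a cancellation-clause field the model inferred with 0.75 confidence
print(review_status("30-day termination for convenience", 0.75, "inferred"))  # -> inferred
```

The design point is that the reviewer's attention is routed by how the value was produced, which is what turns data entry into the faster approval-style review the panel describes.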
Em Daigle:
Totally. Yeah. But, Jason, do you have anything outside of that that you'd wanna add in response to that?
Jason Berwanger:
I'll, uh, I'll take the bait with, uh, Sebastian's question in some sense. And I will say, I think what's almost implied in this question is there is a zero, if not a net negative sum game for a number of AI projects today. And I don't think that's the case with a lot of the value that Isaac and team deliver and the testimonials that I've heard and seen. Uh, I do think that that is commonplace today with investments in pilots for AI. As a matter of fact, there was just a study done that came out today that, like, ninety five percent of pilots end up, uh, failing to meet the objectives when it comes to AI. And it's almost like these AI pilots are the new finance transformation investments where, you know, the best finance transformations ever did was get a 65% chance of getting the outcome you wanted initially, which was like trying to apply ERP to decentralized business models. It just doesn't work. Um, I absolutely think that's something to think about and be mindful of, that it's not a net zero trade or a net negative trade. Um, and I think, at the end of the day, we think about, um, the cost of AI and some of these LLM models to do certain tasks. And right now, we're asking a lot of LLMs and AI to do things that are both negative trade offs from a risk perspective, and they're negative trade offs from a computation output perspective where you're using LLMs to answer a $50 question, and it's costing, you know, in compute, you know, $550. And, uh, it's a not so sexy future where you have to be careful, though, that doesn't become, you know, the next AWS where you get everybody, you know, addicted to the pricing model. And then, uh, you know, there's a critical thinking erosion, which I see Sebastian just asked about. And then, oh, by the way, now these basic tasks and the cost of ownership for the business is out five x what it was when I had, uh, Brenda do it, who has been doing this for twenty five years, and she knows what she's doing because Brenda's the most efficient person on Earth. Um, I do think there is something to be wary of there, and we have to use our critical thinking to make sure that we apply that first principles approach. And the problem has to meet the solution there with a clear definition of success. So far, uh, this is very much emerging tech, and most pilots fail to deliver any incremental value. And you're not gonna see on an overall company and country perspective some incremental GDP from AI. You are gonna see success stories where folks like Isaac take very specific problems and use very specific technology to solve them. I think those are the folks to focus on and be mindful that it, you know, has a useful, uh, amount of value to add. And I think, you know, follow-up on Sebastian's follow-up, is there a risk that AI erodes accountants' critical thinking? Yeah. Totally.
You know, if you're using a tool that allows you to get answers quickly with, like, instant gratification and doesn't allow you or require you to do critical thinking and doesn't care whether or not it has the entire context to give you an answer, that's kinda dangerous in my opinion, which is why I stand by my point around accountants being more important and that governance aspect being more important in the next five to ten years, even more so than it was historically, um, and why I don't think AI is the right solution for that given the trade offs from a cost or a risk perspective.
Em Daigle:
Yeah. You know, it's interesting too. I was thinking about that. And I almost think of Sebastian's first question into the second one where I think of some of those repetitive tasks. And I'm just gonna use one example of, um, reviewing, uh, an order that got entered into the system. Today, our current manual process might be that somebody's entire job is to look at what's been entered into the system, what is on a PO, and literally, like, does it all match? And that over time is tedious, quite boring, and can lead to, um, missing something. Right? Because you're literally just, okay, it matches. It matches. And at some point in time, you're like, okay. All the numbers look the same. Whereas what AI might be able to do and help with is pick up, like, that actually, let me back up. That first process is to find all the needles in the haystack, what's wrong, and then fix it. Basically, AI can give you the, here are the needles. Now you go fix it. And so then to your point about critical thinking, um, Sebastian, is then maybe that critical thinking doesn't become numbed in the match, doesn't match game, and instead, every time you're addressing a mismatch, you actually do need to use critical thinking to say, like, okay. What doesn't match here? What is the process that needs to go through, that needs to be done to fix it? Right? So I think there's probably arguments on both sides and certainly risk, and one that we all have to be, um, cognizant of for sure. Um, but I think just a lot of opportunity as well. And then speaking from my own experience, you know, all day, I get to basically learn about new technologies, talk to new founders, figure out what's out there. There's so many new things. Um, if anybody's heard me speak before, like, the barrier to entry feels lower than it's ever felt for new technology to sort of come to the forefront, which is really exciting. However, I'm kind of curious from your perspective, each of you, um, and maybe, Jason, we can start with you. But how do you parse through, like, what's real and what's not? What's legitimate and what's BS? Like, how are you kind of thinking about evaluating maybe even different tools for use on your team and what that looks like?
Jason Berwanger:
Yeah. It's been interesting because I feel similar to Isaac when it comes to, like, the marketing and sales tools. Like, those were, I think, really early, easy adoption, um, mechanisms to make sure AI was part of the team. You know, there's also then, uh, anomaly detection. I think that becomes critical from an engineering and technology perspective, as well as in our own technology, to make sure that we're, you know, finding those needles in the haystacks so that we can figure out what best to do about them and highlight them. Um, so I think we've got some good gains there. Um, I think there have been some interesting downsides. Like, I mean, the fact that Isaac and I both have sales and marketing AI tools at one and two on the list, that has led to a proliferation where it's now, I think, almost too easy, where it's equally easy for anyone to market and sell and say they can solve a problem from a marketing sales go to market perspective. And I think that is leading quickly towards people realizing that, uh, I'm not exactly sure this testimonial is real that I'm, you know, reading on the website or, you know, these upvotes are really maybe just paid upvotes. And, um, I think people are falling back to, I'm gonna ask communities of fellow accountants, etcetera. And so I think it's two things. It's made community and getting feedback on a lot of those important long term partnership decisions more important than it has been in a long time. And I think it has, uh, commoditized a lot of the day to day selling and marketing to probably the detriment of the recipients of the sales and marketing to where, you know, if I could rewind, I would like the old model of selling and marketing, but you have to play. And an example of that is, like, I probably get eight to 10 unsolicited LinkedIn messages a day that are clearly, like, AI come chat, and probably a hundred emails. And, um, it was probably true, let's say, three years ago, that the folks that would break through to get into the inbox, you'd sometimes find the needle in the haystack. This is a real person with a real perspective on a problem that I absolutely have. And, like, I no longer can even check to find that person occasionally in the inbox because it's too noisy. So, um, I think those are examples, like, from our perspective as a company and mine as a professional, of how we've leveraged AI and what it's done and hasn't done. So, definitely, the anomaly detection, I think, is a key benefit. Um, I will also point out that, you know, the same paradigm we're talking about here from an AI perspective for, um, some of these anomalies can also be applied here. Like, a great example is we have a number of alerts and detections within Hubify around, like, Stripe orphan payments or PayPal orphan payments, where someone just collected dollars, and we have no context as to how or why that happened. But what we can do is we can look at the context that we do have from an invoice linking and matching and a contract linking and matching and a customer linking and matching and say, okay. Is this just a wallet transaction? Or is this, like, not a wallet, not an invoice? I don't know anything, and we need a human to make a decision on this. Um, and that, I don't think, is a good use for AI and LLMs. I think that's more of the deterministic.
You can, uh, discern through a set of rules and intelligence to figure out exactly, in a repeatable way, what makes up an orphan payment and define that. And the accounting teams help define that. And then anytime that criteria is met, you can solve it. And there's obviously then a root cause of why did that happen in the first place, and then how do we get that into the accounting automation layer. And, again, I don't think an LLM would be a good use case for something like that. Whereas, more broadly, anomaly detection in engineering and code, and where there is a lot less determinism, is, uh, you know, pretty highly useful.
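A toy sketch of the kind of deterministic rule set Jason describes for orphan payments follows. The categories and field names are assumptions for illustration, not Hubify's actual logic.

```python
def classify_payment(payment: dict) -> str:
    """Classify a collected payment using only deterministic links, with no model calls.

    Expects illustrative fields: linked_invoice_id, linked_contract_id,
    linked_customer_id, is_wallet_topup.
    """
    if payment.get("linked_invoice_id"):
        return "matched"                      # normal case: ties back to an invoice
    if payment.get("is_wallet_topup") and payment.get("linked_customer_id"):
        return "wallet_transaction"           # known pattern where no invoice is expected
    if payment.get("linked_contract_id") or payment.get("linked_customer_id"):
        return "orphan_needs_research"        # partial context: route to accounting
    return "orphan_needs_human_decision"      # no context at all: a person decides

# Example: a wallet top-up that never references an invoice
print(classify_payment({"linked_customer_id": "cus_123", "is_wallet_topup": True}))
# -> wallet_transaction
```

Because every branch is a rule the accounting team defined, the same payment always lands in the same bucket, which is the repeatability Jason contrasts with handing the decision to an LLM.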
Em Daigle:
I like it. Isaac, how about you?
Isaac Heller:
Yeah. I mean okay. So a couple things. You're you're talking about, like, just deciding on AI tools and cutting through
Em Daigle:
the noise. Yeah. Like, what's real, what's not. Like, you know, there's just so much noise. It's so hard to sort through.
Isaac Heller:
Look. Um, the people behind the technology matter. Right? You know, Jason and I have lived and breathed this. Right? And we, uh, I can speak for myself. I desperately want our customers to have a great experience with their accounting software. Right? Um, and we hire people who have similar passions. Right? We are surrounded by people who have a lot of experience as CPAs, finance leaders, big four, all that stuff. A lot of us have spent time in product organizations or even financial products. And so I do think it's fair when you're evaluating solutions to see who the leaders are, the founders, the people behind the scenes, get a feel for that, um, salesperson or solution person. And, you know, I know someone like Cody, like, has a lot of industry experience. Like, that's the experience you're gonna get. So the people behind the technology and the company matter. I think there's a second thing I just wanna say because it's kind of interesting, and Jason made me think of it. There's still a lot of questions around the durability of these AI businesses. Right? Whether they're venture backed or subsidized or whatever, you do want a partner that you can have for many, many years, not something you're gonna have to switch in a couple years. And it's funny that Jason brought up the AWS computing costs. Let me kind of explain to you what's happening in the world of AI technology right now. I'll use just one weird example. So the most commonly used AI engineering tool, one of them or two of them, is called Cursor and then GitHub Copilot. Okay? And those two coding tools heavily rely on an LLM, one in particular called Claude that's owned by Anthropic. Right? It was recently uncovered that over 1,000,000,000 of Anthropic's revenue is coming from GitHub and Cursor, right, who have less than 1,000,000,000 in ARR. Okay? So then one of AWS's biggest customers, or maybe it's Azure or whatever, is Claude. Right? So you've got this AI engineering application, backed by billions of venture capital, paying an LLM application, backed by billions of venture capital, paying Amazon, which I believe is profitable. Um, and so if you just look at the big picture, there is a lot of that wrapper type technology. And at some point, these companies are gonna need to, uh, have a durable business model, and they're gonna need to hire, um, customer support people and salespeople and education about the product, and they need to scale their product as they get to bigger companies and more complex scenarios and invest in innovation. And I don't have the answer here, but looking at the people behind the company is really important and getting a sense for the durability of the company. You know? Are they doing their SOC reports? Are they, you know, are they overvalued? Like, a lot of times, you know. I don't know if you heard stories like ScaleFactor, and I think there was another recent one.
They're popping up constantly now. It's Bench. Yeah. Bench. They could rise and fall. So I do think, like, durability is important. Hey. If any customer prospect wants to talk to me, I know Jason feels the same way, call us. Call me and learn about what our business is and what we're trying to do here. So, um, those are just a couple of nontraditional ways to evaluate the company.
Em Daigle:
I think you're spot on, and I think they're all really relevant and helpful pieces of advice as we all try and figure out, navigate our way through the craziness. Um, I was gonna say month to month, but I think we should say week to week at this point. It's now moving so quickly. Um, so I had one more question, but I actually see a couple of others in here, um, that I'd like to get to just given, um, the time, and one in particular because it got so many votes. So I have a feeling a number of others have the question, um, as well. So, um, if you guys are cool with it, I'd love to just kinda, like, jump over to those, and then we can, um, wrap it up and perhaps continue in the breakout rooms after. But, um, so this one goes back to the question early on, but, um, this is around clean data, a key ingredient to successful ongoing implementation of systems. Right? Like, whether it's AI, automation, whatever it is. But can AI itself be used to help clean up the data before AI is fully implemented, or maybe even for automation? I'm gonna tweak that a little bit and say AI or automation. Like, can we use AI to help do some of that data cleansing and or cleanup, um, in advance? Isaac, you're smirking. I'll let you
Isaac Heller:
go first. No. I mean, Jason and I are complementary here because Jason's gonna know really well what data you're looking for and how you want it to look, right, in those scenarios. And I do think there's an application of AI here, only because, remember, figuring out what data to put in a system is still allowing you to have clean data. So if you're using AI to sift through large amounts of data to learn about that data a little better, um, that's okay. And you could actually even use, like, Gemini on a Google Sheet or Copilot on Excel to ask more questions on that data. So I think it's perfectly appropriate. But kinda kick it over to Jason because I know for sure you need to know exactly, or pretty damn near sure, what you want your data to look like before you get into that AI.
Jason Berwanger:
Yeah. Yeah. I mean, I think the cop out answer is it depends, uh, but I'll go further. So clean data, uh, definitely an ingredient, but I think it goes beyond clean data and more into, like, contextualized data. Um, and so if your goal is, for example, to have an AI FP and A analyst dataset that can surface insights for you, then, uh, no. I don't think AI itself can be used to help clean the data. If by clean, we're talking about fully contextualizing the business model, the customers, the history, you know, all of the knowledge of those professionals along the way and the governance that it takes to do that from an accounting and finance perspective? No. I think those are human in the loop opportunities, and that's very likely a foundational automation project to, um, get the data in a highly correlated, normalized way that is ready to be trained by LLMs and AI. Now are there, uh, specific tasks along the way where AI could be useful to clean and, you know, really wrangle and scrub some of that data? Absolutely. It can, and you'd be silly not to use those nowadays. And I think it helps a lot. But when it comes to building a rigorous and repeatable system, obviously, the degree that you rely on AI without the full context to do that versus a human as you build those governance, you know, data layers to get that normalized data out to train, no. I don't think that's gonna work. Um, and a way that I always think about this, there's a common example that we run into with, like, CRMs. Um, oftentimes, like, we think about a CRM and we say, okay. There's a CRM and it's got a number of contracts, then eventually that goes into billing and payments, etcetera. And, um, often, it's like, well, couldn't we just use AI to then just, like, scan all the contracts? And then, you know, that'll just figure out the six zero six and the billing, and, you know, the problem just goes away. And the question I always wonder is, like, how predictive is historical performance of future events? And if the answer is not super predictable, well, what that's pointing to is that if you have your sales folks inventing a new discount code every single day, then an AI LLM isn't gonna save you. Like, the accounting team's in hell, and just like the LLM is gonna be in hell, and someone in accounting is gonna clean it up. Um, the answer to that is, like, we should build proper abstractions that make sense for our business that govern how we should have certain discount types and terms. And, you know, there still may be some opportunities to scan for certain termination clauses, and AI may play a part in standardizing the output from those. But the real first value prop is standardizing and abstracting what you should and shouldn't do in your contracts and your CRM. So that way, your revenue operations team ends up funneling, let's say, down into a clean, um, dataset that allows you to bill and contract and perform, you know, whatever your service or good is for that company. Um, that's likely where 80% of the value is gonna be. And then there's the 15% jump from, okay, scanning all of the independent variables of these contracts that really, you know, are highly variable within a certain set of discounts or terms or performance obligations or terminations, where AI can help then codify those into the datasets that the deterministic pieces, like billing and accounting, actually need downstream.
Um, so I do have a strong opinion that I think that when you think about an example, and a common one that I think people have, if you just start with the AI and you ignore what the actual problem is, it's like, yeah, but is what you're doing historically predictive of future results? And if not, then no LLM's gonna save you, because what it's doing is learning from a lot of those prior transactions and what little context it has, and then it's trying to give the best answer possible. Um, and so if you would ask a human to do that and it would be hell, then it would likely be hell for the LLM, and then for the human trying to audit it, is my overly opinionated answer.
Em Daigle:
The only difference is AI may not give you the same eye roll that, like, maybe
Jason Berwanger:
Well, it gives you a wink on Tuesday and an eye roll on Wednesday.
Em Daigle:
Yeah. Exactly. There you go. There you go. Um, awesome. Alright. One more, um, and then I think we'll wrap it up, um, for the masses. But for those that wanna keep going, we can jump into a a breakout room. But, um, this one I like because it's kind of, um, focused more on maybe, like, the process. Is there a recommended process? Like, should companies automate certain processes first? Should you automate then do AI? Is there sort of, like, a recommendation to how you might start to adopt these things, or do you wanna just, like, jump into AI in certain cases? I know there's probably a bit of a it depends typical accounting answer. Um, but that being said, um, Jason, how would you sort of answer that from, like, a the best approach in in how to start adopting both automation and AI?
Jason Berwanger:
Yep. I think the answer is neither. Uh, the first thing that you should do is understand what is the value of automating and or solving this business problem with AI or automation. And that then gives you a very clear path to is the trade off worth it to then invest in AI or automation and make a good trade off for your business that actually adds incremental business value. And I think one of the reasons why that 95% is out there is because of the idea that there are CEO mandates around AI adoption, which don't really somehow directly or indirectly tie back to business value. It's just this aimless, faceless, try to gain efficiency by trading hours and headcount for AI. Uh, and I feel like that's a losing proposition that, uh, divests from the first principles thinking that we all should use, and that I think finance and accounting is great at, which is, uh, you know, what's this problem worth if we solved it? And is there value in having more real time data? And if we did have more real time data, let's form a hypothesis around the opportunity cost. I think at that point, now you've got the right place to start. Um, and if the problem is, yeah, I've got the data I need, and I don't need to specifically prove the answer that I got, and just getting directionally correct data is enough, then AI might be a great place to start. If you're like, well, no. I've got a bunch of different systems, and there is a deterministic relationship between the two of them. And I've got one of these clear accounting needs, hey, I need to get my close to a daily basis for these key metrics so that I can share that with the rest of the teams on how this latest promo code, etcetera, is doing. You might wanna start with automation because, you know, the punishment fits the crime in that sense.
Em Daigle:
Good answer. Punishment fits the crime. Isaac, how about you? Any closing thoughts on on sort of the approach to that?
Isaac Heller:
Um, I would call Jason first if you have that question. But, look, again, it's a broad question. There's so many different scenarios. But if I really think about it, I would think automation first. Right? That's the truth. Right? I would think automation first. And I think the risk of error of just throwing AI at things is too high right now. Um, my one exception would be for these kind of PDF data entry flows. But, um, automation has kind of, like, taken a back seat because AI has taken the headline, but it's still, I think, the default choice for a lot of accounting, uh, challenges today.
Em Daigle:
I love it. Well, this has been wonderful. Uh, gentlemen, I really appreciate your time. We are just about up on time. Um, want to offer up to the audience if you guys want to learn more with Hubify or Trillion. Um, you have these wonderful gentlemen, um, whose reputation precedes them to be able to, um, really help you understand, uh, where their solutions could help with some of your challenges. Um, thank you both for allowing me to join you and, uh, help guide the discussion without jumping in too much. It's hard for me sometimes to not give too much of my own opinion, but I appreciate you, uh, you having me. And, um, thank you all so much for joining. We will jump over, um, to the breakout rooms momentarily for anybody who would like to stick around and keep chatting.
Jason Berwanger:
Sounds great. And thank you. Thanks.
How finance and accounting teams should think about the company hierarchy of data needs from operational to compliance to financial in the age of AI and automation.
The difference between probabilistic and deterministic outcomes and when you should leverage AI, automation, both, or neither.
How accounting can position itself to be a strategic lever and increase efficiency through focusing on operational data.



