Are generative AI technologies credit union innovation partners or industry intruders?
Taylor Nelms: My knee-jerk response is this: large language models are very powerful, but technology predictions in a hyped-up environment are also very, very hard. Whatever is happening, credit unions can't buy into either the breathless hype or the extreme fear-mongering. Most predictions, and especially predictions made with "real" certainty, are going to be incorrect. Moreover, when things are truly unsettled and emerging, as they are now, we shouldn't simply try to predict the future. We should be actively helping to shape the future. My number one piece of advice to credit unions on this is that they should not shy away from those conversations but embrace the opportunity. Don't ask, "What will the impact of ChatGPT be on credit unions?" Instead ask, "What do we want that impact to be?"
Caroline Vahrenkamp: Absolutely. Let's think back to the 1970s, when ATMs first arrived on the scene, which led to prognostications that this would be the end of humans in banking. Still today, there are articles saying that the branch is dead. What's happened? Credit unions have grown their branch network collectively as an industry pretty consistently over the last 20 years. Ultimately, any technology is a tool. That's all AI is: an additional tool that you can use.
TN: The introduction of these powerful generative AI technologies has suddenly made people more open to the idea that the future can change. A lot of people are thinking about how AI can help cut costs. But if lowering costs is your only measure of success, there will be unintended consequences. So the most important question that credit union leaders have to ask themselves is: What do we want to be optimizing for? How are we going to measure success?
CV: You can't cut your way to high performance. If you go into this thinking, I'm going to use this as a tool to help improve my member and employee experience, I think you will have far more success than if you say this is a tool to help us cut costs, even though it may also help your operations become more efficient and your employees more productive.
Let’s talk about the future of talent. AI is about to join the employee team—what is that going to look like, and what should the industry be thinking about?
TN: I read an interesting blog post from Sarah Hinkfuss at Bain Capital Ventures, who talked about the differences between “generative AI as a service” and “generative AI as a component.” As an industry, we've started to shift away from the assumption that AI will do all the work and toward having AI be a support, an assistant, a copilot.
CV: It doesn't have true problem-solving capabilities. There are things that AI can do very well, and still you will want someone to look at the answer to make sure it’s accurate, appropriate, and consistent with your brand.
TN: There's already emerging research from outside of financial services that shows that access to ChatGPT and similar technologies boosts productivity for some workers, in part because it can be good at repetitive tasks and simplifying complexity. Used with the right guardrails, we can absolutely see a world where it can empower employees and spark creativity rather than stifle it.
CV: The use of AI and ChatGPT is going to have to become decentralized. You need people across the organization able to use it—from business intelligence to marketing, lending, finance, and operations.
TN: And what that does is put pressure on role definition and the kinds of skills workers need. There is an emerging skillset around using generative AI, and there are folks right now who are experimenting and getting good at designing the right prompts, engaging in conversations that produce high-quality work, and avoiding clichés and hallucinations.
CV: You can have the best data warehouse and the best CRM system in the world, but if you don't know how to ask the right query, you won't get the results you need.
TN: And there is a tradeoff here. We’ve seen many technologies designed to increase efficiency and productivity, but when they are implemented at scale and everyone is expected to use them, it generates new expectations and additional workload for employees. As we think about the adoption of things like generative AI internally by staff, we need to be thinking about employee experience and engagement too.
For example, there’s a real question about fairness when it comes to using generative AI. We've seen it become a sticking point in labor negotiations like those around the Writers Guild and SAG-AFTRA strikes. Part of the question is around eliminating jobs or paying people less. But another issue is, where does the data come from that goes into training these models? For credit unions, as you think about fine-tuning your models and generating positive service experiences, think about where that data comes from and how the model has been trained. Are you basing your models on past service interactions with some of your best employees?
CV: I think we’re going to see more legal questions come out of this, especially around sourcing. Can ChatGPT effectively cite the sources it draws from if it's been trained on the Internet? What does this mean for copyright infringement and plagiarism?
TN: Absolutely. I have a rule of thumb from the anthropologist Nick Seaver: “If you don't see a human, you're not looking closely enough.” For credit unions and all potential users of generative AI: Be intentional about looking for where the humans already are. Think about what their roles and responsibilities should be, and ensure they're treated well. Again, how are you measuring success?
Let’s talk about security risks and fraud, and what we can do about it.
CV: Fraud is perhaps the most important topic we should be talking about. Fraud detection is always an arms race. AI can now make phishing and other kinds of scams so much more realistic.
TN: It pushes conversations about fraud to a new level of significance. I think it's going to be incredibly challenging, if not impossible, or just inordinately expensive, for any individual institution to keep up on its own. The only way to tackle it is systemically, and that means thinking about shared solutions.
CV: I'll even go further and say this is something that needs to be embraced by all financial institutions. This is another step in the technological development staircase, just like access to the Internet or to email—but one during which, for the next two or three years, the fraudsters are going to be ahead of the rest of us.
TN: That’s a fair expectation. How do we equip financial services providers to be good stewards of members’ data as well as their money? We are entering an era where we need to embrace that opportunity for collaboration. And there are other risks too—one being accuracy and bias. We talk about hallucination as a bug of some generative AI solutions, but in a way, because of how the technology works through probabilistic prediction, every output is a kind of hallucination. That manifests in a number of ways, not just through clear falsehoods but by repeating widespread misconceptions, myths, or jokes—for example, asking “what color makes bulls angry” and getting the answer “red.” Not true, but widely believed.
CV: Inaccuracy is a brand risk. AI that generates wrong or biased responses in some way can lead to issues for your reputation and trust in your organization.
TN: And if you're limiting your generative AI to the point that you’re putting up very strict guardrails around what it can or cannot say, do you even need a generative AI system?
CV: It’s about finding the appropriate use cases. As it stands now, for analysis, I think there's a lot of interesting use for generative AI in understanding and extracting information from a large variety of sources. But I do worry about using it creatively, and especially about it being used creatively by people who want to do you ill.
Final question – what are the risks and benefits of uniting AI technology with the human-centric, community ethos that has differentiated credit unions?
CV: I think it depends largely on the competitive marketplace in which credit unions are operating. If you're using it in member-facing capacities as a kind of chatbot or to develop marketing messages without human oversight, it might seem distant to members. It could have a real negative impact on the brand, especially if you're not checking for accuracy. But if you’re using it for analysis, synthesis, and as a tool for initial drafts, I don’t think there will be as much pushback.
TN: I think this is a question of trust, and it’s absolutely central. How can you leverage generative AI to build and maintain trust with the community that you serve? Consumer trust ought to be one of those metrics you prioritize! Generative AI could be part of the process for credit unions to meet members’ specific financial needs, support their financial wellbeing, and expand access to safe and affordable financial services. But like you said from the very beginning, Caroline, its rightful place is as a tool, not a panacea.
Want to connect and learn more about ChatGPT and Generative AI? Check out Filene’s recent webinar So You Want to Talk About…TextGen AI for CUs! Also, be sure to subscribe to Filene's new LinkedIn Thought Leadership Newsletter - Thinking Forward!