Voice of the customer is the process of collecting consumer feedback to improve a company’s products and customer experience. It also focuses on an individualized experience—collecting data on specific people, not just demographics.
In this talk, we discuss how to use custom machine learning models to analyze a 360° view of what customers are saying on social media, call/chat logs, reviews, and satisfaction surveys in order to programmatically update product descriptions, predict and prevent returns, recommend resolutions for customer service, and more.
Hey everyone, welcome to the webinar. Nice to have you here. Looking forward to talking about Jaxon, the application of machine learning to the voice-of-the-customer segment, and digging deep. Hopefully we'll have a nice conversation about the use cases and how you can unlock the power of AI for your particular organizations.
With me, we have Jaxon’s Chief Product Officer, Robin Marian.
Good morning, everyone. Nice to meet you all.
And Jaxon’s head of customer success, Chris Deschenes.
I pronounced your last name wrong. How do you pronounce your last name?
Deschenes is pretty close, depending on—
I knew it, I knew it! Deschenes!
It depends how French you are.
Chris is new to the organization. We’re very excited to have him here. Thank you, Chris.
Glad to be here, Scott. I appreciate it.
All right, so you’re looking at my screen here. Let me start at the top here.
So Jaxon, in general, is an AI training platform. We take raw data (for the bulk of the use cases we'll talk about, think raw text, natural language) and turn it into training data: data that can be used to train custom models. These custom models are designed specifically to solve use cases within your organization, whether nuanced to your organization or widespread applications like analyzing customer reviews, customer service call logs, chat logs, survey responses, what have you.
So I'm going to skip ahead to this slide, and we'll probably use this graphic to talk through a number of the techniques you can use to accelerate the process of creating trained models while optimizing human time: just enough human supervision to get a trained model out the other side that's as accurate as possible and fine-tuned to the problem specification defined by the users, one that again aligns to your nuanced use cases.
Alright. So with that as a tee-up, we really want to dig deeper into voice-of-the-customer use cases. The first one that comes to mind, and one I know you have a lot of experience with, Robin, is understanding the voice of the customer at the interface between customer support and the users calling in with issues, wants, needs, and desires.
So how have you seen machine learning applied to contact centers and call logs?
So to begin with, the call center and customer support segment of the industry right now is a growing industry. There are expected to be close to 300 billion contacts, calls made by customers to businesses to get their problems solved. And there are many different types of issues that customers call in about, each unique to the business.
So, as an example, if you're calling Apple about a product, you might say, "I'm looking for an iPad" or "I have an issue with my iPad."

But if you're calling Amazon, you'll be asking, "What happened to my order? I bought this XYZ product."

So each use case is different, and across these different use cases, what we're really seeing is that personalization, understanding who the customer is and why they're calling, and directing that customer's call or chat to the right agent is very critical.
So in applying a technology like AI to customer support, just like you said, Scott, Jaxon as a platform provides a way to personalize your AI models, to train them on the data you're getting in.

And that's very critical. Here's something that might interest you and Chris: did you know what happened in 2019, as soon as the pandemic hit, to the scale at which customers contacted businesses like the travel and hotel industries?

There was about a 2x to 3x increase in the number of calls and chats across different channels. And if you look back, it makes sense: people had hotel and travel reservations, and suddenly, with everything put on hold, they all called in, trying to get everything resolved through the call center.
That's very interesting; I hadn't heard that stat. But now it gets me thinking about the call reps themselves. They were impacted by the pandemic too. There was probably a shortage of call reps, and how do you properly analyze their efficacy?

Chris, I know you have some firsthand experience with that right now, as you're working with one of our customers that deals with this issue. Can you not only explain a little more about what I mean, but also get into how we can analyze the call reps themselves?
Sure. So we're working on a pilot with a customer that does this on an outsourced basis for large, recognized brands.

They're really concerned with the efficiency of that support mechanism and how well their call reps' activities match what's being asked of them. So we want to be able to answer questions such as, "What was the nature of the call or the support request?" And this can come in via audio or other channels.

So we need to transcribe that, and then classify it into an understandable category that allows them to direct the request to the appropriate place, as Robin alluded to.

And the next question is, how effective was that interaction with the user? Was the caller directed to the next proper step in the chain, or did the caller have to iterate through to finally find where they needed to be? Did the rep provide appropriate guidance where it was their responsibility to provide guidance rather than routing? Can support be improved by a tool trained on these interactions that assists reps in near real time as they're dealing with a customer issue?
Those are the kinds of things that we’re looking at in that pilot.
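The pipeline Chris describes (transcribe the call, classify the request, route it to the right place) can be sketched in miniature. Here's a hedged illustration in Python: a keyword-scoring classifier stands in for a real trained model, and the category names, keywords, and queue names are all hypothetical, just to show the shape of the classify-then-route step.

```python
# Hypothetical categories and keywords; in practice a trained text
# classifier replaces this keyword scoring.
CATEGORIES = {
    "billing": {"invoice", "charge", "refund", "payment"},
    "technical": {"error", "crash", "install", "password"},
    "returns": {"return", "damaged", "broken", "exchange"},
}

# Hypothetical routing table from category to support queue.
ROUTES = {
    "billing": "billing_queue",
    "technical": "tech_queue",
    "returns": "returns_queue",
}

def classify_request(text):
    """Score the transcribed request against each category's keywords."""
    words = set(text.lower().split())
    scores = {cat: len(words & kws) for cat, kws in CATEGORIES.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "unknown"

def route(text):
    """Direct the request to a queue, falling back to human triage."""
    return ROUTES.get(classify_request(text), "human_triage")
```

From there, measuring effectiveness is largely a matter of logging whether the routed queue actually resolved the call or the caller had to be re-routed.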
Yeah, "AI-assisted agents" is certainly a topic I wanted to bring up during this call. But before we segue into that, I wanted to get into some of the nuances of Jaxon and our approach to call analytics. One area that I find pretty interesting, just to get a little technical for a second, is the fact that the models running on those GPUs are oftentimes only taking in the first 512 or so tokens of a call transcript.

Yet in order to keep the context of the entire call (and if you're trying to assess the call reps and their efficacy, you really want the whole back-and-forth throughout the entire call), how do you keep that context if the models cut off after, say, the first 512 tokens? What attention mechanisms can you bring into the model's training so that you can cut up that text, keep the context, and then recombine it, so that you can properly label the call?
Yeah. And I think that's where the power of the NLP we see in recently released models like BERT and its variations helps: organizing the content, especially text, into nice bite-sized pieces so that the models can learn from those tokens and the relations between all of them.
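To make the chunk-and-recombine idea concrete, here is a minimal sketch (plain Python, no model) of the common sliding-window workaround for that 512-token limit: split the transcript into overlapping windows so context carries across each boundary, classify each window separately, then pool the per-window scores into one call-level label. The window size, stride, and mean-pooling are illustrative choices, not Jaxon's actual mechanism.

```python
def chunk_with_overlap(tokens, max_len=512, stride=384):
    """Split a long token list into overlapping windows.

    Consecutive windows advance by `stride`, so each adjacent pair
    shares (max_len - stride) tokens of context across the boundary.
    """
    if len(tokens) <= max_len:
        return [tokens]
    windows, start = [], 0
    while start < len(tokens):
        windows.append(tokens[start:start + max_len])
        if start + max_len >= len(tokens):
            break
        start += stride
    return windows

def pool_scores(window_scores):
    """Mean-pool per-window class scores into one call-level score."""
    n = len(window_scores)
    return {label: sum(s[label] for s in window_scores) / n
            for label in window_scores[0]}

# A 1200-token transcript becomes three windows (512, 512, and 432
# tokens), each overlapping its neighbor by 128 tokens.
windows = chunk_with_overlap([f"tok{i}" for i in range(1200)])
```

The overlap means the end of one window is re-read at the start of the next, which is the simplest way of preserving some context without hierarchical attention.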
And just going back to that original point about the call center industry being hit with 2x to 3x volume: what would you say the prediction is going to be, say, six months to a year down the line when everything opens up? What's really going to happen? I think you're going to see the same kind of trend.

How prepared are we going to be as businesses to handle that kind of load, that volume of customers reaching out? And how will we be able to recognize what they're calling about and help the agents so that they have lower average handle times and get to first-call resolutions?

That all points toward what was considered a top trend for 2021-2022 in the contact center industry.

That is AI-assisted agents: artificial intelligence is going to get more and more into the core workings of helping customers and helping businesses achieve the results they're really looking for, which is first-call resolution, getting answers resolved.

And what better way to do that than by using the newest versions of AI and NLP?
Chris, in the example you've been working on, what trends are you seeing?
That's a good question, Robin.
So what we're seeing is that the massive volume of data is very difficult to attack, particularly when you want to get a handle on your support mechanism: how efficient it is, how you might increase efficiency, and whether it's acting the way you intended it to.

But the challenge is, how do you derive insights from this massive influx of data coming in on a constant basis? There's a tendency to say, okay, we can just throw bodies at this problem and have them work at it from that perspective. But the scale you're alluding to makes that almost a non-starter in many cases. And then you've also got issues with accuracy, and issues with the people doing that labeling process, which I'm sure we'll get into more.

They're being asked to mechanically label things as they come across them; in some cases that becomes very rote, and you're not getting the kind of insight you want out of human labelers. So it really does suggest there needs to be some augmented way to provide accurate labeling at the scale we're talking about, in order for decision-makers to get the most they can out of the voice of the customer.
Yeah, I totally agree. The quality of the data you use to make business decisions within the company matters, and not only for decisions; it matters for the technologies call centers are using too. One of the biggest trends of the last couple of years is bots.

Many companies are using bots, and how do you train a bot so it knows the right sentences, the right questions to ask the customer? That comes down to how well you train the bot, how well you train the AI models feeding these bots to decide what next steps to take and what questions to ask next.
Right, absolutely. And as users who have been on the receiving end of these bots, our patience for wrong or not-quite-right communications with bots is very low, so, spot on.
To the point where there are companies that use humans behind the scenes, making the decisions about what the bots say next. I was getting into this earlier today, about how chatbots are structured: they have natural language understanding, where they receive the conversational piece from the user and decide which slot it fills in a dialogue tree, which determines the pre-canned message to send back. With all the AI agents out there, there are still very few, if any, that use natural language generation to actually have a conversation. They're all just these big boilerplate dialogue trees.

So actually, Robin, I know you've had a lot of experience with customer care. What are your feelings on the divide between making this completely automated versus keeping a human in the loop behind the scenes? Customers, like Chris was mentioning, get too frustrated, too quickly, with the bots. Where do you see bots fitting in, and how long will it be until we're not even talking to a human when we call customer support?
So that's a very good question. We could talk about that for hours, but let me give you the canned version of what I'm thinking.

A few years back, say ten-plus years ago, customers were not really used to the concept of talking to an automated solution, a robot on the other end that's trying to help you.

But in the current day and age, I've changed my perception, because technologies like Alexa, Google Assistant, and Apple's Siri, which are also bots, are now present every day in many people's houses. People use them on a regular basis, and the technology feeding these bots keeps getting better and better. I feel those technologies and learnings are going to translate into bots used for customer support, and best practices are going to be identified.

One that comes to mind off the top of my head, which I don't see a lot of bots doing, is being very proactive about stating what they can help you with and what they're absolutely not capable of helping with, where they'll transfer you to an agent. Bringing that up front to a customer would be very helpful.

So as an example, Chris, you said you've had bad experiences. But let's say I'm a bot, and I open with, "Hey Chris, I'm a bot. I can help you with checking your order and canceling your order, but for anything else, I'll transfer you to an agent. How does that sound?" Would that make you a little more open to engaging with me, as a bot, to get your problems resolved?
Yes, I think I would. And to your point, conversational AI is still a very active area of research, and the best people and companies out there are pushing the limits to get it better and better. But I think it's still difficult for a bot to be indistinguishable from a person, right? So, do you know it's a bot or not? That's an interesting thing to look at.

But over time, I believe they're going to get better and more attuned to what you're asking. And particularly as I interact with a particular brand and a particular product more, it may start to know me a little better and give me more of that 360° view of what I might need help with. That, to me, would be great.
I mean, I would find very useful—
I agree with you.
Maybe a little creepy, but I’m all right.
Ha ha. Just to make my prediction, I think that we only have about five years left for actual humans to answer a call on the customer support side. I’m gonna be so bold as to say that. But to pick up on the “bot or not” theme, I want to shift the conversation outside of contact centers and more toward what the voice of the customer is. Not actually audio voice, but what are they saying on social? What are they saying in review comments on retailer sites, in survey responses?
And, importantly, the retailers and consumer packaged goods companies care about this as well: how do you distinguish the authenticity of those respondents? Because I've found, in working with retailers and in other capacities, that there's a lot of fraudulent activity out there, where competitors intentionally write bad reviews, or people fill out surveys just to get the $10 they're paid for filling out the survey, and they're not really, quote unquote, authentic.
Any views on that, Robin?
So I think that's a pretty interesting area. A few years back, Amazon and a couple of the other online companies went through this same exercise, trying to weed out the fake reviews they were getting on products. And I believe it was Yelp that went through the same process with the negative and positive feedback in restaurant reviews. They employed a lot of people, human resources, to weed out those responses and comments.

In this day and age, realistically speaking, using human resources, and even identifying people with the expertise to do this, is going to be very difficult. What we're really looking at is new research into machine learning modeling techniques and algorithms that can identify the syntax of the content being submitted as text, right?

Once you take the content, you identify that syntax: is this something generated by a human, or is it a canned response, or does the grammar just not really match?

I think this is going to be a challenging front, because companies like Grammarly are doing an amazing job of identifying syntax and helping you get much better at passing your message across in written text. So you're fighting two different emerging technologies that are both making everything more human-like. But we're not there yet.

So we can definitely use these technologies, and we can train models that can find the nuances within the comments that help us identify whether a comment was written by a human, or is auto-generated, or robotic in nature.
We actually have a question from the audience on that subject: it's really difficult to identify authenticity just by looking at the natural language.

What other data types can be brought in and correlated, ultimately serving as a way of scoring authenticity and deciphering who is a human versus a bot versus someone just trying to get their 10 bucks?
Any thoughts on—I’m leading the witness—multimodal modeling? Chris?
Multimodal will help. Multimodal, really quickly, is the ability to model not only natural language but other data types along with it. The biggest use case for that is imagery with text describing it.

In the case of bot detection, one could think of analyzing the text coming out of an account alongside, say, a time series of when its tweets are posted, and correlating them over time: hey, these tweets are coming in batches at particular times of day. That's not a typical person's behavior; it looks mechanical. So you could give that account a higher score indicating it's potentially a machine-driven bot trying to pass as a human.

But I think what gets interesting is that we can create the bot, and at the same time we're probably being asked to defend against the same bot. So we're in a cycle where we're trying to create and defeat the same thing at the same time. So I guess, pick a side, try to go deep on it, and then use whatever techniques you come up with to attack the other side of it.
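As a toy illustration of the time-series signal Chris mentions, one can look at the gaps between an account's post timestamps: tight, highly regular batches are a weak hint of automation. The thresholds below are arbitrary placeholders, not tuned values, and a real detector would combine many such features.

```python
from statistics import mean, pstdev

def gap_stats(timestamps):
    """Mean and spread of inter-arrival gaps (timestamps in seconds)."""
    ts = sorted(timestamps)
    gaps = [b - a for a, b in zip(ts, ts[1:])]
    return mean(gaps), pstdev(gaps)

def looks_scripted(timestamps, max_gap=5.0, max_spread=1.0):
    """Flag accounts whose posts arrive in tight, regular batches.

    A human's posting gaps tend to be large and irregular; a script
    often posts with small, near-constant gaps.
    """
    if len(timestamps) < 3:
        return False  # too little evidence either way
    avg, spread = gap_stats(timestamps)
    return avg < max_gap and spread < max_spread
```

A flagged account would simply get a higher bot-likelihood score, to be weighed alongside the text-based signals.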
And I'd like to add a little bit more to that.

We definitely won't be able to get to the depth of it here, but there are certain other things you can use. You can use metadata that helps you identify trends in access points; I think this is something a lot of security companies have used to identify DDoS attacks and network injections.

I know this from my previous life, where a lot of this metadata, network traffic, is used to identify whether something deserves consideration or more focus in a specific area.

Another signal would be time series, as Chris mentioned, right? Especially things like the frequency at which these comments come in: do they come in batches, do they come in at a certain time of day? That might help us identify that they're originating from a specific time zone, right? And there's a lot more; it depends on the content, on what we're talking about and where. Is it online e-commerce, or something else?
Where it’s being posted from?
Posted from, yes.
Yep. Very good. Alright, we're at the tail end here. We're going to be holding these monthly, so I hope everyone comes back. Does anyone have any additional questions? Anything at all? We can hold a few minutes of Q&A here. Absent that, I guess I'll pick up one last topic. Any questions? No? Alright.
So, for the last topic, looking at our list, page ranking comes to mind. Oh, this is a fascinating area for voice of the customer. And like you mentioned, Chris, companies are able to personalize the experience a lot more now, and I'm seeing an interesting trend around getting people to their sites. How do retailers attract you to their site to buy whatever good you have the intention of buying? And they get all that from how you search.

So you do a search on Google or wherever, you get your page of responses, and page rank matters. But I've heard that just having the retailer's site show up in the rankings is not the best path, because the buyer isn't at the point in their journey of making a decision to buy something yet; they're exploring.

So, you know all those "top 10" pages? Say you're looking for a couch: "top 10 couches to buy," with different themes and different descriptions of why you might want to buy this couch versus that couch.

Those pages are actually not very authentic themselves. They're all driven by the retailers, and they're a trap to get you to their site. But they give you something to read that's not a catalog: a natural-language-generated description of the product you just searched for, one that hooks you into clicking through to their site. And they'll re-rank the top 10 based on who's doing the search.

So that, again, you're captured as quickly as possible; you don't have to read the whole top 10 before clicking through to the retailer's site. So, with only a couple of minutes to go, do either of you have any thoughts on that subject?
This just reminded me of another issue from quite a few years back. I think it was one of the travel companies that used to determine the ranking of the hotels and the flights they showed you depending on the version of your operating system, or the type of the—

[Unintelligible] affair. And it was a big issue. And when I heard that, as a user, my instinct was to test it out and see: is this really happening?

So I had two different machines from two different companies side by side, and we did exactly the same search, and yes, the ranking was—

—different. So, just adding fuel to what you're saying: it really does happen, and not only in e-commerce. I see it happening across a lot of different companies at different scales, right?

And does it stop there? If they know you're coming in looking for some kind of demo, and they can identify you as coming from a specific location, would businesses want to change the price range of the offering? How far will they go, and how far can they push this kind of segmentation? And is there a way we can avoid it, or at least be aware of it?
Yeah, I think it's getting down to the individual level: not just broad swaths of one product's owners versus another, but down to the individual. There are hundreds of attributes these retailers use to put the right promotion in front of you, the right advertisement at the right time, the "next best action," as it's called.

Another thing I think is pretty fascinating is the idea of changing the user experience on their sites once you land there. So you go to retailerx.com, and the way it's presented to you, even the layout, might change based on your own personal preferences, all under the guise of getting you to buy more stuff and convert carts. But personalizing the experience down to the search-results level, I think, is a pretty fascinating area. Most retailers aren't at that level yet, just for clarity, but some are getting there.
And we cannot talk about search and personalization without talking about dynamic classes. Can we?
Oh, I love dynamic classes. Yeah, my favorite topic.
But I think we will be—are we out of time, or?
We’re out of time. We’re gonna end on dynamic classes, unless there is—oh, there actually is another question in the audience. Let’s see, what do we have here?
Any savings that your clients have seen by implementing your solution?
Alright, so back to Jaxon. We didn't talk a lot about Jaxon; we got so excited about voice of the customer that we went on and on about it. But how does Jaxon impact all this?
So if we go back to this graphic: Jaxon automates the process of creating training data for custom models. Say the model identifies the authenticity of a reviewer on a retailer site. The review comments become the input. We'll go grab 100,000 or 500,000 review comments, and we'll label just enough to seed the system, so that it has enough knowledge and representation of each of the possible classes. An easy one to go with is just "authentic or not," a binary classifier.

Enough labeled examples of inauthentic and enough of authentic to seed the system, and then bootstrap the training pipeline to autonomously label the data and train up a model that can be used in production.

So as opposed to having armies of humans sit down and read hundreds of thousands of those examples, this reads just a handful. As a rule of thumb (and the engineering team hates when I put numbers on these things, because it's very much an "it depends" kind of answer), if you have just 20 to 100 labeled examples per class, you can automate this whole process.
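That seed-and-bootstrap loop is, in spirit, classic self-training. Below is a minimal sketch with a nearest-centroid classifier standing in for the real model: start from a few hand-labeled feature vectors per class, pseudo-label only the unlabeled points the model is confident about (here, via a distance margin), and retrain. Jaxon's actual pipeline is more sophisticated; the margin, features, and classifier here are purely illustrative.

```python
def centroid(vectors):
    """Component-wise mean of a list of equal-length feature vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def dist(a, b):
    """Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def self_train(seed, unlabeled, margin=0.5, rounds=3):
    """Bootstrap a nearest-centroid classifier from a small seed set.

    seed: {label: [feature vectors]}, the handful of hand labels per class
    unlabeled: list of feature vectors to pseudo-label
    Each round, points whose nearest centroid beats the second-nearest
    by `margin` are pseudo-labeled and folded into the training set.
    """
    labeled = {label: list(vecs) for label, vecs in seed.items()}
    pool = list(unlabeled)
    for _ in range(rounds):
        centroids = {label: centroid(vecs) for label, vecs in labeled.items()}
        still_unlabeled = []
        for x in pool:
            ranked = sorted((dist(x, c), label) for label, c in centroids.items())
            if ranked[1][0] - ranked[0][0] > margin:
                labeled[ranked[0][1]].append(x)  # confident: pseudo-label
            else:
                still_unlabeled.append(x)        # ambiguous: leave for humans
        pool = still_unlabeled
    return labeled, pool
```

Points that never clear the confidence margin stay unlabeled, which is exactly where you would spend the remaining human review time.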
So from an ROI perspective, I actually have a slide on this in the deck; let me skip down to it. It is orders of magnitude faster. Let me get out of this mode here for a second. The faster you can get your model into production, the faster you can start making money off it. So if this is a revenue-generating model you're building, speed really matters.

In this case, the case study I'm showing is on return comments for a retailer. With Jaxon, the labeling was 100 times faster, which is a big deal. Obviously, there are cost savings that come with that.

I'm fascinated by the notion that not only is Jaxon faster, it's actually more accurate. We looked at this dataset, a case study we did with a retailer, and there are certain classes within the labeling. In this case it's a lot more complicated than a binary classifier: this classifier had 64 different classes to label.
So there are some classes, and in this case we're talking about return comments, like "The trash can I received has a large dent." That's damage, that's obvious, any human can recognize it, and lo and behold, the humans got the damage-related classes quite right.

I mean, they did quite well on them; they were fairly accurate. But there are certain classes, the longer tail, that are harder for a non-subject-matter expert to recognize: warranty-related issues, loyalty-program-related issues, things very specific to the company.

And in those cases, the labelers were, believe it or not, in the twenties in percent accuracy; they got it wrong that often. So the ground truth, the training data these models use to learn how to properly classify, was flawed from the get-go.

So Jaxon is not only producing these models in days versus months, it's also producing curated training sets that ultimately train these models better and yield higher accuracy on the other side.
Yeah, and I can add to that. One of the things that accuracy helps with is reducing the number of experiments a team has to run.

You have your teams do multiple trainings, and that takes a long time: it can take someone anywhere from weeks to build a model over the available data to six months or more to really get to the point where you're confident enough to say, "Okay, this is the model I really want to push into production," because anything you push into production has to meet that high bar so it doesn't negatively affect the customer experience.

Jaxon as a tool also helps reduce that overall time frame, letting you run hundreds more trainings and experiments in the same period of time in which you might otherwise do 10 or 15 of them.
So that brings up another point, and we do have one more question. So I guess we're not wrapping up in a minute, but we will wrap up in the next few minutes.

It brings me to the notion that it's not just about accuracy and speed: data science actually is science. It's experimenting, trying one path versus another and arriving at the most accurate result that gets the model to where you want it to be.

But it's never one-and-done. There are AutoML solutions out there where you load in data and get a model out the other side. If I were a scientist (and I play one on TV), I would try AutoML first because it's so easy: you don't have to do anything, you just load data and see what you get on the other side.

But in our experience, those don't usually cover what you need to cover. The experimentation that ensues from a workbench like Jaxon is really what's needed. Where Jaxon comes in, just to talk about Jaxon for a second, is that instead of experimenting by having data scientists who know how to code write custom models or custom code to drive model training, we have a push-button-friendly interface that allows non-coders to do the same.
Alright. To the last question, is Jaxon just about natural language, or is there more to it?
So natural language has been our focus out of the gate.
I mentioned some of the data types earlier, call transcripts, chat logs, review comments, survey responses, social media posts—
And taking that last one as a great example: you have social media posts that contain natural language, but they also have structured, tabular data that coincides with the post, like the number of likes and the number of followers. Location, if you can get it (which is really hard; sometimes only around 30% of profiles will show location), might be impactful as well. That metadata, as you were referring to earlier, Robin, can be correlated against the natural language to provide a better classification.
So to answer this person’s question, thank you for asking it, natural language is supported, tabular data is currently supported, and a future release will support images as well.
So if you have a social media post with a meme in it, the meme alongside the natural language, the post alongside the comments and their natural language, and the number of likes, followers, and all the other attributes can be correlated at the same time to do the proper classification.
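A bare-bones illustration of that fusion idea: derive a few numeric features from the post's text and concatenate them with the tabular metadata (likes, followers, and so on) into a single feature vector for a downstream classifier. The hand-made text features below are toy stand-ins for a learned text embedding, and the metadata layout is hypothetical.

```python
def text_features(text):
    """Toy text features: word count, share of all-caps words, '!' count."""
    words = text.split()
    caps_ratio = sum(w.isupper() for w in words) / max(len(words), 1)
    return [len(words), caps_ratio, text.count("!")]

def fuse(text, metadata):
    """Concatenate text-derived features with tabular metadata.

    metadata: e.g. [likes, followers, has_location]; a classifier then
    sees both modalities in one vector.
    """
    return text_features(text) + list(metadata)

# A shouty, low-engagement post yields one combined feature vector.
vector = fuse("GREAT product BUY NOW!!!", [2, 15, 0])
```

A real multimodal system would learn the text representation jointly, but the principle of correlating modalities in one model is the same.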
Multimodal modeling is a central tenet of Jaxon. And, fortunately, it's one that Google has embraced as well. Just this week, in fact, they announced an initiative they call MUM (M-U-M), which is very much toward this end, and they're going to be using it in their search. Coming soon to Google: you'll be able to not only write a natural language query but also upload a picture, and have Google search on both.

So say you're looking for a particular part. We were talking to a home improvement company recently about their search experience. I would love to be able to take a picture of the vacuum filter I'm trying to replace, put it into a search with "find me a vacuum filter," and have it correlate across both and find me the filter, without my having to sift through all the different pages I usually get back when I'm looking for my filter. Things like that. I think the search experience is going to get dramatically better.
But back to Jaxon: we are very much along those lines of thinking, that it's not just one data input that matters, it's all of them together. Any comments on that before we wrap up here, Robin or Chris?

No comment from me; I was just looking at some of the questions coming in. I think it's very, very interesting. People are really interested to know more about Jaxon and what we can do, and they're very excited about it. If you're interested in knowing more about the product, please feel free to reach out to us.

This is our contact information; you can reach out to any of us, and we'd be more than happy to have a personalized call with you, understand what your requirements are and what you're really looking for, and help you understand how Jaxon can help you solve those problems.
Very good. Well, thank you, everyone, for attending. I hope you enjoyed the discussion. We will put this up on YouTube and on our blog once it's been processed, and, again, we'll be doing more of these, so expect a new one next month.
Thank you, Chris. Thank you, Robin. Very nice talking—
Thank you, Scott. Nice talking to you.
Actually, really enjoyed this conversation, so, thank you.
I’m looking forward to the next one.
Looking forward to it.