
How to be a master chef with AI

September 9, 3:06 PM

Undeniably, the AI revolution is upon us and will continue to have a profound influence on localization for years to come. Proof of this paradigm shift is the rapid advent and adoption of neural machine translation (NMT), thanks to its superior quality, especially for more difficult and long-tail language pairs. However, NMT is just one of many tools in the AI arsenal. Within the broader spectrum of AI, there are many other machine learning (ML) algorithms that can dramatically improve all the steps in the globalization content life cycle, ranging from source content classification, target suitability, and smart resourcing to targeted, AI-driven LQA. The talk provides some practical recipes for using AI in business processes.

Transcription

Bryan Montpetit 00:08
We have Alex Yanishevsky, who is going to be joining us. He's actually here. Welcome. Alex is an industry veteran; he's done, I think, everything you can name in the industry. He's worked across the content value chain, for example: authoring, content management, engineering, and machine translation. He's the man who gives me title envy, because he's got one of the coolest titles I've seen in recent times: he's the Senior Manager of AI Deployments at Welocalize. Before we get into your presentation, why don't you just tell us what that means, exactly? And then I'll hand over the virtual mic to you, and we can get going with your presentation.

Alex Yanishevsky 00:49
Sure. Hi, everyone. Glad to see you, Bryan. Thank you very much for the nice introduction. So basically, I'm in charge of gathering specs from clients and understanding what solutions they may need that are related to AI. And it's a very broad title on purpose, because it really can cover anything, ranging from machine translation to natural language processing technologies, like, for example, sentiment analysis, data mining, etc. So it's purposely all-encompassing, to cover all of those.

Bryan Montpetit 01:28
Great, okay. Well, then I'm pretty sure that "How to become a master chef with AI" is going to be enthralling. So I'm just going to let you go at it, and I'll pop in towards the end of the presentation.

Alex Yanishevsky 01:39
Great. Thank you. Let me go ahead and start sharing my screen.

Bryan Montpetit 01:42
Yep. Thank you very much.

Alex Yanishevsky 01:49
Yeah, so again, thank you very much for having me, and thank you very much, everyone, for joining and for your attention in such a difficult time. Hopefully this will be a nice presentation from which you can gather lots of useful information. I think it also dovetails very well with Smartcat itself, because as you can notice, even in the URL of Smartcat you see smartcat.ai, which certainly signifies artificial intelligence, or what some people call augmented intelligence. And what I'm going to be talking about is how you can use different machine learning algorithms in AI in a very practical business setting. So the reason this is called "How to be a master chef with AI" is that, obviously, because of COVID we find ourselves in a very difficult situation financially, where in addition to already having very tight deadlines and very tight content, now more than ever we have to do more and more with less, right? That could be less resources, less time. So the reason I call this a master chef is that I wanted to use the metaphor of cooking. If you imagine, for example, something very simple, like you're baking a cake, you're baking some dessert, you basically have ingredients: you're typically going to have flour, water, and eggs, right? And depending on what it is that you want to produce, it could be a sponge cake, it could be an apple charlotte, it could be cookies, etc., you're going to mix and match these ingredients in different proportions. And that's the metaphor that I want to use here. So the ingredients that we're going to be using are data, right? Data from the clients, and this can take the form of translation memories, glossaries, style guides, and other information, such as: is this a good translator? What is the profile of this translator? And we're going to be matching that with machine learning algorithms to get a prediction. That prediction could be a translation, like machine translation, or it could be a prediction from a model that says: do this, or don't do this, or do this with this degree of probability. And what this will give us is business outcomes. That's ultimately what we're getting to, and these business outcomes, as I'll hopefully show you with some of these ideas, allow us, in this very tight financial situation where we have to do more with less, to improve processes from a technology perspective, to lower cost, and ultimately to improve productivity. So before we get started, I wanted to break things down so we have a common definition of what is AI, what is ML, and what is NN, because even in the mass media you see lots of these acronyms flying around. And often the mistake made in the mass media and newspapers, for example, is that they conflate all of these concepts and use them as synonyms. They'll use AI and machine learning interchangeably. Strictly speaking, they're not. So that's why I wanted to give you this broad view of how we look at it. Basically, AI, artificial intelligence, involves machines that can perform tasks that are characteristic of human intelligence. So it's kind of this broad umbrella.
Now, within this umbrella, we have machine learning: machine learning algorithms, which give a machine the ability to learn without being explicitly programmed. By that we mean we're not going to give the machine rules; we're going to give it patterns, and it's going to identify those patterns and create a model from which it'll be able to generate responses. So that sits within the broader umbrella of AI. Now, within machine learning, and you've probably heard quite a bit about this, there is the concept of neural networks. You've probably heard about NMT, neural machine translation, which is state of the art: that's what Google is using, that's what Amazon is using, that's what Microsoft is using. That's the latest trend in machine translation. Now, that's actually a subset of machine learning, one of the many machine learning algorithms there are. And if there are any AI researchers in the audience, you're probably going to kill me for this superficial explanation, but hopefully, as a kind of popular science explanation, it will make sense to people who are interested in what neural networks are. So when we say a neural network is modeled loosely after the brain, what exactly does that mean? Let's try an experiment. If I show you what this is, probably most of you, within a matter of a few seconds, are going to say, well, this is a dog leash. And if I asked you what you do with it, you would say, well, I use it to walk a dog. Now, if we unpack what happened in your brain, and you try to retrace your steps as to why you gave the answer that you did, and you gave that answer instantaneously, you would probably say: well, it's a leash. I know that a leash is not used for fish; it's not used for birds. So you're beginning to narrow down what this leash can be used for. It's being used for mammals, and probably for domesticated animals: generally speaking, you're not going to walk down your street walking a hippo, for example, or a rhinoceros; they probably won't take very well to that. So now we're talking about domesticated animals, and you can see that you're filtering and filtering to get a narrower context. Now, among domesticated animals, you're typically not going to walk a cat on a leash; the cat is not going to be very happy, and you'll probably end up with a lot of scratches. So you further streamline your possibilities. You could conceivably walk a ferret; some people do that, but it's very unusual. Most people are going to use it to walk a dog. So that's how you backtrack into what this is and how you use it. And again, in a very superficial way, this is how neural networks work: you have all of these features, the way I described things just now, and these features get tagged with particular contexts and particular words, and that's ultimately how you get to the most probable answer. Generally it's the best answer, but strictly speaking it's the most probable answer. So now that we have a broad sense of what AI is and how ML fits into it, let's talk about how we can use it in our globalization content supply chain. This is a very simplified content lifecycle.
I understand that for many people there are going to be a lot more steps: you're going to get a source, then a target, before you deliver to the client, and even before it goes to LQA it may go through a bunch of different iterations. You're probably going to have translation, editing, proofing; you may have subject matter review. So this touchpoint on the target may actually have four or five steps. I'm simplifying it just so we can talk about the vertical blocks; those are the things I'm really concerned with. So basically, we have a source, and before we send this source over to be translated, in other words, before we kick off the project in a translation management system, we can decide whether this source is suitable for our purposes: is it fit for purpose? By the same token, when we get a target, it's been translated, it's been edited, it's been proofread, the subject matter expert has said it's wonderful, and now we're going to send it over to LQA for one final check. But before we even do that, there is a way for us to check whether this target is suitable, whether it's fit for purpose. And we're going to be talking about what we can do in those two blocks and how we can use machine learning algorithms. And finally, when we're done, let's say we've sent it over to our AI-driven LQA and finished the project, we can use these assets, these translation memories, for retraining the machine translation engine, if we're using one in our workflow. So my next slides are going to cover source suitability and target suitability. I'm going to give you some examples of how you can model things out, and I'll talk about the business outcomes, so someone who's a data scientist or computational linguist can see what's happening behind the scenes, and at the same time someone in management and production can see the real-life value of it. So when we talk about source suitability, what exactly do we mean? In a previous webinar, in a previous talk, there was an allusion to controlled, simplified English, and that's basically what we're getting at here. There are a bunch of features you can use to decide whether a source is suitable or not. For example, and here I'm focusing on English as the source because there are lots more of these tools available for English, but the idea of these machine learning algorithms and part-of-speech taggers actually applies to pretty much any source language: it could just as well be a Romance language, a Germanic language, a Slavic language, or an Asian language like Japanese, Chinese, or Korean. So there's this idea of readability scoring, which is based on the question: if I write something, at what grade level is it written? Let's say I'm producing manuals for the airline industry, for airline technicians, or for automotive technicians. Generally, those tend to be written in very simplified, controlled English, typically at no higher than an eighth-grade level.
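To make the readability-scoring idea concrete, here is a minimal sketch in Python. It assumes the open-source textstat package (pip install textstat); the talk itself does not name a specific tool.

```python
import textstat  # pip install textstat; one common readability library

manual = "Remove the panel. Disconnect the cable. Replace the filter."
white_paper = ("Our stochastic optimization framework leverages variational "
               "inference to approximate otherwise intractable posteriors.")

# Flesch-Kincaid grade: roughly the US school grade needed to read the text
print(textstat.flesch_kincaid_grade(manual))       # low grade, controlled English
print(textstat.flesch_kincaid_grade(white_paper))  # much higher grade level
```

A score around 8 or below would match the controlled-English manuals described above.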
At the same time, if, let's say, I'm producing a white paper that's basically semi-academic, because I want to prove that my software or my approach has technical chops, has certain merit, that's typically going to be written at least at a high high-school level, 11th or 12th grade, if not at a college level, and in some cases even at a graduate-school level, if you're dealing with lots of mathematical formulas. So this readability score allows us to see how the content was written, and we'll talk in a second about why we do that. We can also do things like topic classification; I'll show you that on the next slide. Basically, if a client tells us this is marketing, we can actually check whether it truly is marketing: are we seeing lots of words related to marketing, or are they perhaps related to legal, or to something else? We can also parse sentences grammatically. If you go back to your grammar-school days, when you were learning the language, or if you're learning another language that's not your native language, you inevitably get into the idea of grammar: here's a sentence, and I have a subject, a verb, an object, an indirect object, a clause, etc. We can parse sentences like that, and the reason that's useful is that we have enough historical experience by now, and I'm saying this as an industry, to know that there are certain constructions that are problematic for human beings. Take noun phrases: if you're dealing with a morphologically rich language like Russian, for example, which has six cases, and you have three nouns in a row, it's very difficult for a human being to figure out what belongs to what and what should be in which case. Now, if you send that problem to a machine, the machine is going to have even more issues. So we can flag these issues ahead of time. And the last thing here is the idea of what computational linguists call perplexity. Think of it, on a very simplistic level, as stylistic matching: if I've written my content in one fashion, I can check whether new content that comes in matches it or not, and if it doesn't, there are steps we can take. And what are those steps? Why do we even want to do this? We have a few options. We can decline to run the project until the source is improved: we can collect all of these flags that say, listen, this could be problematic, this could be an LQA risk, then go back to our authors, our technical writers, show them the problematic examples, and have them fix the source. That's one way to do it. Another way is to say: look, we have a machine translation engine trained for marketing, but this particular text is legal; or maybe we have an engine trained for medical, and we're getting finance. So let's route the project to a more suitable machine translation engine and get better output. And lastly, if we decide to go forward with the project because deadlines are tight and the client needs it done no matter what, we can alert all of the production people, the PMs, linguists, and LQA, that there's a higher LQA risk.
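As one concrete illustration of the parsing idea, here is a sketch that flags long noun clusters, one of the constructions called out above as hard for humans and harder for MT. It uses the spaCy library; the four-token threshold is illustrative, not something from the talk.

```python
import spacy  # pip install spacy && python -m spacy download en_core_web_sm

nlp = spacy.load("en_core_web_sm")
doc = nlp("Install the hydraulic pump pressure sensor mounting bracket assembly.")

for chunk in doc.noun_chunks:
    # Flag noun chunks of four or more tokens as an LQA risk (illustrative cutoff)
    if len(chunk) >= 4:
        print(f"Long noun cluster, flag before MT: '{chunk.text}'")
```

Flags like this are exactly what feed the ahead-of-time alerts described next.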
So we can say ahead of time: hey, dear translator, in file number four, sentence 11 is very long and has some difficult constructions; be aware of it. And we can immediately tell the LQA people: listen, when you do your QA, don't do it randomly; focus in on these potential problems. So, just to show you what that looks like, this is a chart for topic classification. The idea here is that all of the words with the same color belong to the same topic: everything that's orange is one topic, everything that's green is another topic, everything that's red is another topic. That allows us to identify the major topic and route the content to the correct workflow; in the example I was giving, is this truly marketing? If not, if it's actually finance, let's route it to the workflow for finance, and to the machine translation engine for finance. And we're doing this, again, because we want to optimize our process and reduce costs. Here's another example, this one to do with sentence similarity. On the y-axis you have the number of sentences, and on the x-axis you have the similarity. This particular chart tells us that most of our sentences are not similar to each other; they're unique, they're different from each other. Now, if the graph were shifted to the other side, where we had high similarity, think 50%, 60%, 70% similarity, kind of like your fuzzy matching, if you will, so lots of sentences that match each other closely, that would signal a different outcome. Is this in itself good or bad? There's no right answer; it really depends. Imagine we're writing a marketing piece for a trade show: you want the piece to be snazzy, you want it to capture the audience, you want lots of uniqueness, you want sentences to be different. By the same token, if we're writing a very straightforward technical manual, do this, now do this, now do that, you're going to have lots of sentences that repeat and that share very similar building blocks. So this is just a methodology for us to see where we are on the spectrum. And again, what this leads to is cost reduction and routing the content to the correct workflow, so we're not backtracking in our steps.
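A sketch of the sentence-similarity idea behind that histogram: score each sentence against its nearest neighbor using TF-IDF cosine similarity from scikit-learn. The talk does not specify its similarity measure; this is just one simple stand-in.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

sentences = [
    "Press the power button to turn on the device.",
    "Press the reset button to restart the device.",
    "Our new campaign redefines what coffee can be.",
]

tfidf = TfidfVectorizer().fit_transform(sentences)
sim = cosine_similarity(tfidf)
np.fill_diagonal(sim, 0.0)  # ignore each sentence's match with itself

# High values suggest repetitive, manual-like content; low values suggest
# unique, marketing-like content. Plotting these gives the histogram above.
for sentence, score in zip(sentences, sim.max(axis=1)):
    print(f"{score:.2f}  {sentence}")
```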
So let's talk about target suitability, because we can do the same thing on the target side. Again, there are features we can map out. Think of passive constructions: someone was doing something versus something was being done to someone. Translators find noun phrases and noun clusters problematic; subordinating conjunctions can be problematic; clauses can be problematic. A very long sentence is very difficult both for a human being and even more so for a machine translation engine. So what we have found, and this has been corroborated by research as well, is that the sweet spot for sentences is somewhere between 6 words and about 25 words. Why do I say that? Because sentences under six words can be very ambiguous precisely because they're so short. If you're translating UI strings and you have something like "file" or "edit", first of all, it can be two different parts of speech: it could be a noun, it could be a verb, depending on the context, and it can be very ambiguous as to what it is doing in that context. By the same token, if you go to the other side, with sentences above 25 words, you're getting into 30-35 word sentences; think of legal writing, with lots of clauses and lots of semicolons, where it's very difficult to unpack what belongs to what. So that's why the sweet spot, where people tend to do very well, generally falls between 6 and 25 words. The other thing we can do here is look at perplexity, which is basically language modeling. In the same way we could do this for the source, we can do it for the target. The example I typically give people is to think of something fairly simple, like the word "house". We can have a bunch of synonyms for it: it could be a house, a condo, an apartment, a shack, a hut, a yurt, etc. So we can vet our translation against a language model to see if we're following the style. If all of a sudden your translator produces a translation like "hut" or "shack", unless they're translating for a camping magazine, they're probably doing something wrong, given that the vast majority of the time this client represents an abode as a house, a home, a condo, or an apartment. So that's one way we can test the style. Again, why are we doing all of this craziness? Because we can go back to the linguists for more editing even before we send the text to LQA. Rather than sending it straight to LQA, we can say to the linguists: listen, we found a bunch of inconsistencies, because you're not matching the style the client typically uses, and there are lots of very difficult constructions; please fix this before we send it to LQA. In some cases, if the deadline is very tight, we can go ahead and send the text to LQA, but then, rather than LQA doing a random test of, let's say, 10%, we can point them to specific files and specific sentences to focus on. So now we have a more AI-driven, more focused LQA that has more meaning. And lastly, we can capture the information about what gives translators and LQA problems, and use that information for retraining a machine translation engine. What I mean by that is: if we see that a sentence is very long, and it took someone lots and lots of trouble to post-edit it, we may want to consider whether we actually want to condone such sentences and use them as data to train the machine translation engine, and we may want to go back to the technical writers and say: listen, this is going to keep causing problems; you need to write in a simpler fashion.
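Here is a minimal sketch of the 6-25 word sweet-spot check described above, producing the kind of targeted flags that could be handed to a translator or to LQA:

```python
def flag_sentence_lengths(sentences, low=6, high=25):
    """Flag sentences outside the 6-25 word sweet spot for targeted review."""
    flags = []
    for i, sentence in enumerate(sentences, start=1):
        words = len(sentence.split())
        if words < low:
            flags.append((i, words, "very short: possibly ambiguous"))
        elif words > high:
            flags.append((i, words, "very long: hard to translate and post-edit"))
    return flags

doc = [
    "Edit.",
    "Click Save to store your changes before closing the window.",
    " ".join(["word"] * 34),  # stand-in for a 34-word legal-style sentence
]
for idx, n, reason in flag_sentence_lengths(doc):
    print(f"Sentence {idx} ({n} words): {reason}")
```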
So, a couple of examples of how something like this works. I know this looks very scary, but I'll walk you through it; don't worry about it looking scary. The idea is that we can plot every single sentence in a 3D diagram using the features of the sentence. Take this sentence, which you can see is way outside the norm, not in our sweet spot. What this sentence is saying is: I have 66 characters, plotted on the x-axis; I have zero verbs, plotted on the y-axis; and I have eight nouns, plotted on the z-axis. Now, if you go back to the question of whether this is good or bad, the answer is going to be: it depends. What this is telling us is that our sweet spot of style is right over here, and this sentence is way outside of it, probably because it has a lot of nouns; it has a list of eight nouns in a row. If this were a legal text, that would probably be fine, because you're going to be enumerating a bunch of different things; you're going to have a list. If, let's say, this is a technical spec for a laptop: my laptop has this much RAM, this much hard drive space, this is the resolution, this is the screen size; it's okay to have eight nouns in a row. If you're dealing with a marketing piece, this is not okay, because it's going to be quite dry and quite boring. So it depends. This allows us to very quickly see what the outliers are and focus on those outliers. If this is clearly an outlier, I can flag this sentence to the translator, I can flag it to LQA and say: look, focus on this, fix this; don't fix these, these are all perfectly fine. And again, the reason we're doing this is that we're striving for process optimization and identifying likely errors. Here's another example. Again, please don't be daunted by it; I'll explain. This goes back to the idea of grammar trees. We can tag every single word in a sentence, and we can see how often each part of speech occurs. We were talking about nouns before; let's talk about adjectives. You can see that in this piece of text we have over 200 sentences with zero adjectives. By the same token, we've got about 25 sentences with three adjectives in them. Is this good or bad? If we're writing a marketing piece, perhaps this is not so good, because we don't have enough adjectives; the piece is not flowery enough. If we're writing a technical manual, this is actually pretty good, because we're limiting the number of adjectives; it points to a more controlled English. And we can do this for every single part of speech: you can see pronouns here, adverbs, prepositions; we could do it with nouns, verbs, etc. This lets us very quickly map out whether the text conforms to the style we've agreed on with the client. And always think about the business outcome: we want to identify errors and fix them before we deliver, and ideally, even before we get to that point, we want to identify whether the text is fit for purpose, so we can optimize our process.
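A sketch of how such a 3D feature plot can be produced. The speaker later mentions that the original was a Jupyter Notebook using NumPy and Matplotlib; counting parts of speech with spaCy here is an assumption.

```python
import matplotlib.pyplot as plt
import spacy

nlp = spacy.load("en_core_web_sm")
sentences = [
    "The system restarts automatically after the update completes.",
    "RAM hard drive screen resolution battery weight warranty price.",
    "Save your work often.",
]

# One (characters, verbs, nouns) point per sentence, as on the slide
points = []
for sentence in sentences:
    doc = nlp(sentence)
    verbs = sum(1 for token in doc if token.pos_ == "VERB")
    nouns = sum(1 for token in doc if token.pos_ == "NOUN")
    points.append((len(sentence), verbs, nouns))

fig = plt.figure()
ax = fig.add_subplot(projection="3d")
ax.scatter(*zip(*points))
ax.set_xlabel("characters")
ax.set_ylabel("verbs")
ax.set_zlabel("nouns")
plt.show()  # outliers, e.g. many nouns and no verbs, stand out visually
```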
The last example of where we use machine learning algorithms is a concept called a random forest. I call this slide "why you shouldn't hire Alex as a linguist on Wednesday nights". The idea is, I like to play squash on Wednesday nights; I play in a league that runs from September to April. So imagine that I'm your linguist, you're asking me to translate, you're giving me a project on Tuesday, and you want me to deliver it on Thursday. Now, you could glean from my profile after a while that it's not good to give me a job that spans a Wednesday night with delivery on Thursday. Why? Because you know that I play squash. If, let's say, I win my match, I'm very happy, I'm going to go have a few drinks with my friends, and I'm not going to be particularly interested in doing your translation. By the same token, and you see this cartoon over here, if I lose my squash game, I'm not going to be particularly happy either, so I'm not really going to be focused on your translation. Bottom line: don't give me work spanning a Wednesday night; I'm not going to do a good job. Now, aside from the anecdote, we can actually model that out. We can take features like: what is the deadline? What tool am I using? What's my domain knowledge? How difficult is the source? What's my previous pass/fail percentage? We can use all of those features to model the probability that I'm going to do well on a given project, and then you, as a project manager or a vendor manager, can decide whether to assign that project to me. So here's an example where we've mapped this out. The most important feature here is the domain: if I'm a specialist in finance and you give me a legal project, it doesn't matter whether it spans a Wednesday night or not; I'm not going to do well, because that's not my specialty. Let's say I pass that test. Then our next feature is the deadline, and this is what I was talking about; you can see it's the second most important feature here, it carries the second most weight. If you see that it's a project spanning a Wednesday night, I should probably be one of your last choices, only if you're very desperate. And we can map out the other features in the same way: what is the source, what's my pass/fail percentage in LQA, what tool do I use, etc. So this just gives you an example of how you can model these features out for translators and really use AI to make a good choice when you're choosing people. So, a conclusion, and then I'll stop for any questions you may have. Just to share what's happening on our side: the LQA process I was showing you, we're actually in the midst of implementing. We're projected to increase our productivity by 10 to 15% as a result of this targeted LQA, and we're projecting to save about $30,000 annually as a result of this process. We're also in the process of talking to a client about source suitability and how we could run it for them, and when we did our initial scoping, we expected to save about 5 to 10% in localization costs for them by flagging these issues ahead of time. So let me stop here. I hope you enjoyed the presentation, and I look forward to your questions.
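For reference, a sketch of the random-forest idea for linguist assignment described above. The feature names and the toy data are entirely hypothetical; the talk does not publish its model or data.

```python
from sklearn.ensemble import RandomForestClassifier

# Hypothetical features per past job:
# [domain_match (0/1), spans_wednesday_night (0/1),
#  source_difficulty (1-5), past_pass_rate (0-1)]
X = [
    [1, 0, 2, 0.95],
    [1, 1, 2, 0.95],
    [0, 0, 4, 0.80],
    [1, 1, 5, 0.70],
    [0, 1, 3, 0.60],
    [1, 0, 1, 0.90],
]
y = [1, 0, 0, 0, 0, 1]  # 1 = the job passed LQA, 0 = it did not

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Probability of success for an on-domain job that avoids Wednesday night
print(model.predict_proba([[1, 0, 2, 0.90]]))
# Relative weight of each feature, e.g. domain first, deadline second
print(model.feature_importances_)
```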

Bryan Montpetit 29:21
That was great. I think there was a whole bunch of information in there that everybody's been taking notes on, because there was so much of it. It was great. We have a couple of questions that have come in. Obviously, you have a lot of resources, given the company you're working with at the moment. Perhaps you can break it down for some of the smaller players: how can they start getting into, let's say, AI or MT or what have you, and really start benefiting from the technology? Do you have any advice for the small players?

Alex Yanishevsky 29:58
Yeah, cool. It is true that in our case we're a fairly large company, with almost 2,000 employees, and we have a department dedicated specifically to AI tasks. Having said that, if you're a smaller company, here's where to look: lots of the machine learning algorithms that data scientists and computational linguists use tend to be written in Python. So what you're basically looking for is someone on your side who is one of your most seasoned technical resources, who either knows Python or can start looking into open-source projects in Python. That's a great place to get started: either look for a resource that has Python, or look for someone who is interested in natural language processing, and have them start looking at things like, for example, sentiment analysis and data mining. That's a nice way to get started. TensorFlow from Google is an open-source framework that you could look at too.
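For someone taking that advice, a tiny sentiment-analysis starter. It uses NLTK's VADER analyzer as one easy entry point; the speaker names the task, not a specific library.

```python
import nltk
from nltk.sentiment.vader import SentimentIntensityAnalyzer

nltk.download("vader_lexicon")  # one-time download of the sentiment lexicon
analyzer = SentimentIntensityAnalyzer()
print(analyzer.polarity_scores("The new release is fast and wonderfully stable."))
# e.g. {'neg': 0.0, 'neu': ..., 'pos': ..., 'compound': ...}
```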

Bryan Montpetit 31:04
Excellent. So obviously, to get started in this, there's a bit of knowledge or a bit of learning that's required, but the cost aspect of getting started is relatively low, from what I understand.

Alex Yanishevsky 31:16
Yes, on the cost aspect, really, the barrier to entry is zero. It's really just having the interest and some technical inclination. In our case, for example, we've had folks who were traditional localization engineers, and with some training of their own and training by our data scientists, we've repurposed them into machine translation and machine learning engineers. And there are so many free resources, like edX, like Coursera; Medium is a good place just to read blogs about data science.

Bryan Montpetit 31:52
Excellent. Again, we've had a lot of questions come in, and I think we're probably going to have a lot of people contacting you afterwards for more information. One was cost, and as you mentioned, the barrier to entry is quite low. We've talked about the skills or requirements in terms of know-how, and Python was, I guess, the big aspect there. Are there other sources we should be looking at in order to educate ourselves about this, just in general?

Alex Yanishevsky 32:27
I would say, beyond Python, having the ability to consume APIs, knowing how to consume APIs. A great place to start: Google has a suite for natural language processing called AutoML; Microsoft has a suite of products, and so does Amazon; Amazon's is called Comprehend. You can actually sign up for all of those for free, so it's a nice place to get started and a great place to get your feet wet: learn how to consume these APIs, learn how they work. And then the next step, phase two, would be: all right, once you understand what they do, you can start looking into building your own models and customizing them. That's where things get a bit more difficult. But if you just want to start out, all of those are great resources, and they all give you a trial, just to get a sense of how all this works.
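As an example of consuming one of those APIs, a sketch calling Amazon Comprehend through boto3. The operation names are real Comprehend calls; the example text and region are assumptions, and AWS credentials must already be configured.

```python
import boto3  # pip install boto3

comprehend = boto3.client("comprehend", region_name="us-east-1")
text = "The translation was delivered on time and the quality was excellent."

sentiment = comprehend.detect_sentiment(Text=text, LanguageCode="en")
print(sentiment["Sentiment"], sentiment["SentimentScore"])

entities = comprehend.detect_entities(Text=text, LanguageCode="en")
print([(e["Text"], e["Type"]) for e in entities["Entities"]])
```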

Bryan Montpetit 33:23
Perfect. With that said, what I would like to ask you to do, perhaps after your presentation, is to post those URLs or point people to those resources; I think it would be appreciated by everybody that's watched the presentation, and by the way, we were just shy of 500, just for the record. So, people, if you have enjoyed this and you really want to share the information or talk to Alex, we've got social media channels and Slack channels that you can use to communicate. And just one last question: the 3D model that you were showing, what is that called?

Alex Yanishevsky 33:59
It's basically, I mean, that's actually what it is: it's a 3D model visualization. We have a Jupyter Notebook for it, and it was done through Python libraries: NumPy and Matplotlib were the two.

Bryan Montpetit 34:27
Awesome, okay, because that was another question that came in; people just wanted to know what modeling it was, so I appreciate that. Did you have any final comments that you wanted to share with everybody? Words of wisdom, anything of that nature?

Alex Yanishevsky 34:41
Well, I wouldn't claim words of wisdom, but I would just say: in our industry, I think this is a very, very hot topic, and I think it's going to be the future of our industry, period. Whether you're like Smartcat, which is very forward-looking and uses AI in its workflow to actually build its platform, or whether you're doing data extraction to build glossaries, or sentiment analysis, I think this is really the future of our industry. So I think it behooves all of us to start becoming more accustomed to it.

Bryan Montpetit 35:21
100%. Great, thank you. Thank you so much for such insight. The amount of information, again, was phenomenal. I really appreciate it. Super interesting. Thanks for the time today.

Alex Yanishevsky 35:33
Thank you very much. Thank you guys.

Bryan Montpetit 35:35
All right. Thanks.
