Tutorial Transcripts

Specification

As the starting point for any machine learning project, the Specification, or Spec, articulates what your problem is, why it needs to be solved, and what solutions need to be implemented to solve it. The Spec also denotes the intended Classes and, if necessary, Subclasses. On the Spec tab, you can select for multi-label classification, manually and automatically populate the Spec, adjust the Beta value, and lock the Spec. Classes must be defined in the Specification tab before any dataset can be used in the rest of the Jaxon Platform.

Selecting for Multi-Label Classification

Jaxon defaults to Single-Label Classification but can easily shift to Multi-Label Classification via this dropdown. For best results, make sure Multi-Label Classification is selected before locking the Spec.

Manually Defining Classes

Classes can be added manually or populated automatically from a dataset already in the Jaxon Platform. To manually define Classes, select the text box, type in a Class, and click the check mark or press the return key. It is also possible to add descriptions to Classes. Click the trash can to delete a Class. Use the Clear All Classes button with care; it does exactly what it says!

Creating Groups

Groups can be created to organize Subclasses, typically when there is an unwieldy number of Classes. These Groups are for visual convenience only and allow users to collapse Classes; the Groups themselves do not correspond to any particular Class.

To create Groups, type the Group name in the text box and then click the arrow to see text boxes for that Group’s Subclasses. From there, add Subclasses as you would Classes.

Automatically Populating Classes from a Dataset

To automatically populate Classes from a dataset, the dataset must already be imported into the Jaxon Platform, have at least one Features column defined, and have at least one Labels column defined and labeled with the target Classes.

If all of these conditions are met, select the dataset you wish to import the labels from in the Import From Dataset dropdown. Any manually-added classes must be cleared before this option is available. Jaxon will then import the classes and the Spec will save automatically. Adjustments to the imported Spec can be made manually as desired.

Adjusting the Beta-Value

The F1 score, corresponding to Beta = 1, strikes a balance between precision and recall. In some cases, rather than optimizing overall accuracy, it is more important to minimize false positives (Beta < 1) or to minimize false negatives (Beta > 1).

The option to adjust the Beta value is to the right of the screen, here. A value of 0.5 emphasizes precision, 2 emphasizes recall, and 1 is an even balance.
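
For a concrete sense of what Beta does, the F-beta score combines precision (P) and recall (R) as (1 + Beta^2) * P * R / (Beta^2 * P + R). This short Python sketch (illustrative only, not Jaxon code) shows how the three preset values shift the score:

    # F-beta = (1 + b^2) * P * R / (b^2 * P + R)
    def f_beta(precision, recall, beta):
        b2 = beta ** 2
        return (1 + b2) * precision * recall / (b2 * precision + recall)

    # A model with high precision but low recall:
    p, r = 0.90, 0.60
    for beta in (0.5, 1.0, 2.0):
        print(f"beta={beta}: {f_beta(p, r, beta):.2f}")
    # beta=0.5 -> 0.82 (rewards the high precision)
    # beta=1.0 -> 0.72 (the balanced F1)
    # beta=2.0 -> 0.64 (punishes the low recall)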

Locking the Spec

Once the Spec is satisfactory, it must be locked in order to move through the rest of the Jaxon Platform. Once it has been locked, the rest of the tabs will become available. Locking the Specification cannot be undone – don’t forget to double check before proceeding! Once a Spec has been locked, the only way to change it is to create a new project.

To lock the Spec, click the Lock button at the top left of the screen. If you are certain, select OK in the dialog box that pops up. A locked Spec is indicated by the closed padlock symbol on the Projects tab.

Datasets

In the Datasets tab, you can import data, apply a schema to it, merge two datasets, split a dataset into test and train, augment data, export data, and delete datasets.

Importing a Dataset

To import a dataset, click the plus sign from the Dataset Menu. Fill out the intake form and select Submit. Next, specify the formatting characteristics of the dataset file in this box to the left of the dataset preview and click Import.

Once the data is imported, specify the columns within the dataset that will be used by Jaxon as Features and Labels. Specifying at least one Features column is required to be able to use the dataset.
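
As an illustration only (the column names and values here are hypothetical, not from Jaxon), a dataset might designate a text column as Features and a class column as Labels:

    import pandas as pd

    # "text" would be designated the Features column and "label" the
    # Labels column; label values must match Classes in the Spec.
    df = pd.DataFrame({
        "text": ["my order was wrong", "love the new fries"],
        "label": ["issue", "rave"],
    })
    print(df)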

Copying a Dataset

To copy a dataset, select the one you want to work with and click the Copy Dataset icon. Fill out the pop-up that appears and select Copy. This will create an exact duplicate of the original dataset.

Merging Datasets

Any two datasets can be merged to produce a combined set. To merge datasets, select one of the datasets you'd like to merge. Click the Merge Datasets icon, fill out the pop-up that appears, select the dataset to merge with, and select Merge.

The merge function will merge columns with the same header into a single, combined column. Columns with non-matching names will stay as they are. Note that if your data has any labels, you should make sure the Specification has been assigned and locked before merging datasets, for best results.
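
Jaxon performs the merge internally, but conceptually it behaves like a row-wise concatenation that aligns same-named columns, as in this pandas sketch (hypothetical columns):

    import pandas as pd

    a = pd.DataFrame({"text": ["t1", "t2"], "label": ["x", "y"]})
    b = pd.DataFrame({"text": ["t3"], "source": ["twitter"]})

    # Columns with the same header ("text") combine into one column;
    # non-matching columns ("label", "source") stay as they are, with
    # empty values where a row has no entry for them.
    merged = pd.concat([a, b], ignore_index=True)
    print(merged)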

Splitting Datasets

Any available dataset in the Datasets tab can be split into two smaller sets. The split ratio for labeled and unlabeled rows is controlled independently. Splitting a dataset will create two new datasets while also preserving the original.

To split a dataset, select a dataset to work with. Select the Split Dataset icon and fill out the pop-up that appears.

The SmartSplit feature here is enabled automatically. It is a proprietary method of splitting a dataset that avoids covariate shift and other latent differences between the resulting datasets, improving significantly on random sampling, the traditional method.
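
SmartSplit itself is proprietary, so the sketch below only shows the baseline it improves on: a traditional random split, plus stratification on the label as one simple, non-proprietary way to reduce differences between the resulting sets (all names are illustrative):

    import pandas as pd
    from sklearn.model_selection import train_test_split

    df = pd.DataFrame({"text": [f"example {i}" for i in range(100)],
                       "label": ["a"] * 80 + ["b"] * 20})

    # Plain random sampling assigns rows by chance alone, so class
    # balance and other latent structure can drift between the sets.
    # Stratifying on the label removes one such difference.
    train_df, test_df = train_test_split(
        df, test_size=0.2, stratify=df["label"], random_state=42)
    print(len(train_df), len(test_df))  # 80 20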

The flatten option here removes examples from over-represented classes and flattens the distribution, but does not split a dataset in two. Rather, it creates a flattened version of the selected dataset while preserving the original. 
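
Conceptually (this is a sketch, not Jaxon's implementation), flattening downsamples every class to the size of the rarest one:

    import pandas as pd

    df = pd.DataFrame({"text": [f"example {i}" for i in range(100)],
                       "label": ["a"] * 80 + ["b"] * 20})

    # Remove examples from over-represented classes so that every
    # class ends up with the same count as the rarest class.
    n_min = df["label"].value_counts().min()
    flattened = df.groupby("label").sample(n=n_min, random_state=42)
    print(flattened["label"].value_counts())  # a: 20, b: 20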

Once the form is completely filled in, select Submit.

Augmenting Data

Augmenting a dataset adds both labeled and unlabeled examples to an existing dataset. To augment a dataset, select the dataset to augment and click the Flask button. Fill out the pop-up that appears, select your number of augmentations, and click the Augment button. A new dataset will be created while also preserving the original.

Exporting Datasets

Datasets can be exported as CSV or Parquet files. To export a dataset, select the Export Dataset icon. Select the file type you’d like to export to begin the download.

Note that only columns that have been assigned Features and Labels designations will be exported.

Deleting a Dataset

To delete a dataset, select a dataset to work with and select the Delete Dataset icon. Click OK on the box that pops up if you really want to delete the dataset. Please note that this action cannot be undone.

Neural

The Neural tab allows users to train neural models with deep learning techniques and to view model, training, and accuracy statistics for projects and their datasets. Multiple models may be trained on one or more available datasets.

Training a New Neural Model

To train a new neural model, select the plus button from the initial screen. Fill out the intake form and select Add Training Stage. The required fields are the Name, the train and test datasets, and the text and/or tabular representations to use. Repeat until the desired Training Schedule has been built, then select Submit. With a multi-GPU setup, you will also need to select whether the model training job should be scheduled as Standard Priority or Express Priority.

Within the neural Training Schedule, many parameters can be adjusted: learning rate, decay, and dropout are all adjustable; there is an option to use a Gold set for calibration in addition to the Test set; and certain representations allow for pretraining, transfer learning, and/or unsupervised data augmentation, among other options.

Once a model has been trained, Jaxon provides valuable information about model training and accuracy. The bar graph indicates each Training Stage completed and the F-score for each stage. 

The information and table to the right of the bar chart also provide a high-level overview of the training stage and are indicative of overall model performance with cumulative totals for all available Training Stages.

Click on the pie icon to display a Loss Chart for that stage.

The overview at the bottom provides a Confusion Matrix that maps the predicted vs. actual classes of the classifier for the given examples. The darker the squares are along the diagonal, the better the model performance.

Hovering over one of the cells within the confusion matrix will bring up the number of examples that were expected to fall into that cell vs the number of examples that actually fell within that cell.

Clicking one of the cells within the confusion matrix will bring up actual examples that fell into that cell. This can be used for further calibration to see why certain mistakes happened and strategize how to resolve said mistakes.
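
The matrix itself is a standard construct; outside of Jaxon, you could reproduce one from a model's predictions with scikit-learn (toy labels shown):

    from sklearn.metrics import confusion_matrix

    actual    = ["issue", "rave", "issue", "rave", "issue"]
    predicted = ["issue", "rave", "rave",  "rave", "issue"]

    # Rows are actual classes, columns are predicted classes; a good
    # model concentrates its counts on the diagonal.
    print(confusion_matrix(actual, predicted, labels=["issue", "rave"]))
    # [[2 1]
    #  [0 2]]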

Importing a Neural Model

Users also have the option to import a model created by and exported from Jaxon, allowing models to move between projects or Jaxon servers. The primary intended use case is to invest significant time pretraining a model on large amounts of company data, then reuse it in other projects.

To import a Neural Model, select Import. Choose the model you’d like to import and select Submit.

Copying a Neural Model

Copying a neural model is similar to creating a new model. Using the same intake form, users can curate and fine-tune existing models for a given dataset.

To copy a neural model, select the model you wish to copy and then select the Copy Model icon. Fill out the intake form and select Submit. When copying a model, the Tabular or Text Representations cannot be changed from those of the original neural model.

Exporting a Neural Model

To export a neural model, select it and click Export. The file will download automatically, and a pop-up box similar to this one will appear with instructions on how to deploy the model.

Deleting a Neural Model

To delete a neural model, select it and click the delete model icon. Click OK on the box that pops up if you really want to delete the neural model and all dependent ensembles. This action cannot be undone.

Ensembles

Ensembles allow the user to combine models and heuristics into a single ensemble that returns a single output. Ensembles can contain any combination of heuristics, classical models, and neural models.
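
Jaxon handles the combination internally; as a rough mental model only, a simple majority-vote ensemble over models and heuristics (which may abstain) looks like this sketch:

    import re
    from collections import Counter

    def ensemble_predict(example, predictors):
        """Collect one label per predictor and return the majority
        vote; predictors may abstain by returning None."""
        votes = [p(example) for p in predictors]
        votes = [v for v in votes if v is not None]
        return Counter(votes).most_common(1)[0][0] if votes else None

    # Hypothetical predictors: two stand-in models and one heuristic.
    predictors = [
        lambda x: "issue",
        lambda x: "rave",
        lambda x: "issue" if re.search(r"\bwrong\b", x) else None,
    ]
    print(ensemble_predict("my order was wrong", predictors))  # issue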

Creating a New Ensemble

To create a new ensemble, select the plus from the Ensembles Menu. Fill out the ensemble training form and click Submit. The name, at least one model or heuristic, and the train and test sets are required fields.

When an ensemble has been selected, Jaxon provides valuable information about ensemble training and accuracy. As on the Neural tab, a table is included that contains the ensemble's metrics, as well as a confusion matrix that maps the predicted vs. actual classes of the ensemble for the given examples. The darker the squares are along the diagonal, the better the ensemble performance.

Hovering over one of the cells within the confusion matrix will bring up the number of examples that were expected to fall into that cell vs the number of examples that actually fell within that cell.

Clicking one of the cells within the confusion matrix will bring up actual examples that fell into that cell. This can be used for further calibration to see why certain mistakes happened and strategize how to resolve said mistakes.

Synthetically Labeling Data Using an Ensemble

To synthetically label a dataset using the ensemble, select the dataset using the dropdown provided, modify other fields in the label dataset box as desired, and select Label.

Once a dataset has been labeled by a Jaxon ensemble, the newly-labeled dataset will be available in the Datasets tab.

Deleting an Ensemble

To delete an ensemble, select the ensemble and then the Delete Ensemble icon. There is no confirmation, and this action cannot be undone.

The Flask

The Flask button helps you augment your existing data with synthetic examples. 

Selecting a Dataset

Before using the Flask, you will need to split your dataset into training and validation sets. You can learn more about splitting datasets in the Datasets tutorial. 

To augment your dataset, select it in the Datasets tab. As a general rule, only training data should be augmented. Model testing should be done exclusively on real data, as testing on augmented examples may cause the model to appear more accurate than it really is.

Once your training data has been selected, click the flask icon to open the data augmentation form. 

Give the augmented dataset a name, and choose how many augmented data points to create for every example. Usually, 1-3 augmentations are enough; using too many can introduce unwanted noise. Higher augmentation counts are recommended for flattened datasets in order to populate sparser classes.

If you want to discard the original examples from this set, check this box. The new dataset will contain only augmentations. 

Freeform/Text Augmentations

Freeform augmentations are used to augment text examples. A sketch of the two simplest techniques follows the list below.

    • Random Swap replaces a word from the example with another word selected at random.
    • Synonym Replacement replaces words in the original example with synonyms.
    • TF-IDF Replacement (term frequency–inverse document frequency) replaces a percentage of words in the example based on how frequently they appear in the dataset. 
    • Check this box to enable LM (Language Model) Text Generation. We recommend using the language model for datasets that contain larger amounts of prose, such as full paragraphs. 
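
Jaxon's augmenters are built in; the sketch below follows the definitions above (with a toy synonym lexicon, and drawing the random replacement word from the same example as an assumption):

    import random

    def random_swap(text):
        # Replace one word in the example with another word selected
        # at random (here, drawn from the same example).
        words = text.split()
        i, j = random.sample(range(len(words)), 2)
        words[i] = words[j]
        return " ".join(words)

    SYNONYMS = {"wrong": "incorrect", "order": "purchase"}  # toy lexicon

    def synonym_replacement(text):
        # Replace words that have a known synonym.
        return " ".join(SYNONYMS.get(w, w) for w in text.split())

    print(random_swap("my order was wrong today"))
    print(synonym_replacement("my order was wrong"))  # my purchase was incorrect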

Tabular Augmentations

Tabular augmentations are used to augment other types of data, such as categorical or numerical values. A sketch of Category Swap and Gaussian Noise follows the list below.

    • Category Swap changes the categorical value to another used in the dataset.
    • Gaussian Noise changes the numerical values, assuming that the frequency of existing values falls along a standard bell curve distribution.
      • Gaussian Standard Deviation (STDEV) sets the acceptable distance (standard deviation) from the original value. The higher the standard deviation, the greater the difference between the original and generated values.
    • VAE (Variational Autoencoder) compresses existing data and expands it again using AI. The reconstruction is purposefully noisy, creating variation.
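
As a sketch of how the first two tabular augmentations behave (illustrative values, not Jaxon's code):

    import numpy as np

    rng = np.random.default_rng(42)

    # Gaussian Noise: perturb numeric values with zero-mean noise; a
    # larger STDEV yields generated values farther from the originals.
    prices = np.array([3.99, 5.49, 1.29])
    augmented = prices + rng.normal(loc=0.0, scale=0.25, size=prices.shape)
    print(augmented)

    # Category Swap: replace a categorical value with another value
    # already used in the dataset.
    sizes = ["small", "medium", "large"]
    print(rng.choice([s for s in sizes if s != "medium"]))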

Once the settings have been adjusted to your liking, select Submit. Jaxon will preserve the original dataset and create a new one that includes the augmented examples.

You can find the new dataset under the Datasets tab once the augmentation has been completed.

Heuristics

From the Heuristics tab, you can create your own heuristics. These will be used in an ensemble to label your dataset based on your knowledge of the dataset.

To manually create heuristics, fill out the fields in the Heuristic Editor shown here.

First, provide a name—this will appear later in the “Create New Ensemble” intake form on the Ensembles tab, so make sure it’s recognizable for later.

Next, choose the type of heuristic. You can choose either “Regular Expression” or “Zero-Shot to Target Class”.

Regular Expressions

If you choose Regular Expression, you can now select your target label from the dropdown. This list matches the labels found in the Specification.

Then, in the Pattern section, enter the regular expression that will be used to predict the selected label. 
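
For example (the label and pattern here are hypothetical), a regular-expression heuristic behaves conceptually like this:

    import re

    TARGET_LABEL = "refund"  # must be a Class defined in the Spec
    PATTERN = re.compile(r"\b(refund|money back)\b", re.IGNORECASE)

    def heuristic(example):
        # Emit the target label on a match; otherwise abstain.
        return TARGET_LABEL if PATTERN.search(example) else "Abstain"

    print(heuristic("I want my money back!"))  # refund
    print(heuristic("love the new fries"))     # Abstain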

Zero-Shot to Target Class

If you choose Zero-Shot to Target Class, the editor will change slightly.

Select your Target Label from the list of available classes found in the Specification.

Next, enter one or more positive classes. These contain the intended regular expression(s) that will predict examples that should be given the selected Target Label.

Finally, enter one or more negative classes, which will help predict examples that should not be given the selected Target Label. 

If the negative class is weighted more heavily in the example, then the zero-shot model will give the target class the negative label, and the heuristic will abstain from using the Target Label. Mileage may vary on how many instances of the negative label are required to change the target class to the negative label. 
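
Jaxon's zero-shot heuristics are built in; conceptually, the positive/negative mechanics resemble this sketch, which uses an off-the-shelf BART zero-shot model from the Hugging Face transformers library (the classes and example mirror the breakfast demo later in these transcripts and are illustrative only):

    from transformers import pipeline

    classifier = pipeline("zero-shot-classification",
                          model="facebook/bart-large-mnli")

    TARGET_LABEL = "breakfast"    # a Class from the Spec
    positive = ["morning"]        # evidence for the Target Label
    negative = ["noon", "night"]  # evidence against it

    result = classifier("hash browns first thing", positive + negative)
    best = result["labels"][0]    # labels are returned sorted by score

    # If a negative class wins, the heuristic abstains.
    print(TARGET_LABEL if best in positive else "Abstain")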

When you are satisfied with your settings, click Save.

Generating Heuristics Automatically

You can also use Jaxon to generate heuristics automatically. 

Scroll down to the “Generate Heuristics” section, below manual entry. Select the training dataset which will be used to generate the heuristics. 

If you want, you can select an optional seed ensemble. This will generate heuristics that target errors or abstentions predicted by the selected ensemble on the training dataset.

Once you are satisfied with your choices, click “Generate Suite”. 

The heuristics will display like this once they have finished generating. This could take anywhere from a few minutes to several hours.

When your heuristics are complete, they can be tested for effectiveness. 

To the right of the Heuristic Editor, select a heuristic from the list. 

Type an example into the Test Custom Input box. When you click “Test”, the heuristic will then try to label this example with either the Target Label or “Abstain”. 

In this case, the sample example matches the target pattern, and so the target label is displayed.

In this case, the sample example does not match the target pattern, and so “abstain” is displayed.

Deleting Heuristics

If you want to delete a heuristic, select it from the list to the right of the Heuristic Editor. Click the trash can icon to the right of the heuristic. 

Click OK on the pop-up. This will permanently delete the heuristic and all dependent ensembles, and cannot be undone. Only proceed if you are certain you no longer need the heuristic.

Now you can go to the Ensembles tab and use these heuristics alongside your trained classical and neural models. 

Labeling

Jaxon lets multiple users add new labels to a dataset, or vote on existing labels, with varying degrees of supervision. 

To begin, select the dataset you want to label from the dropdown. All datasets within the selected project will be available.

Guided vs. Manual

Next, decide if you want to let Jaxon pick a labeling mode for you from the six modes available, or choose your own.

Use Guided to allow Jaxon to decide the optimal mode for your batch of examples; in Guided mode, Jaxon calculates a balanced distribution while reducing the overall cost of labeling. Use Manual to select your own labeling mode.

If you’re starting from a completely unlabeled dataset, we suggest you first apply a nominal number of labels to the dataset by using the Random Labeling option. Once there are enough examples of each class represented in the specification, active learning will kick in, further optimizing human time.

Labeling Modes

Jaxon has six modes that allow you to directly label or vote on labels for a dataset. 

With Random Labeling, examples are shown to you at random, ready to be labeled. 

With Active Labeling, an example is picked for the user based on the active learning under the hood. Active labeling is only possible in a dataset that already contains multiple labels per class—either in the dataset itself, or applied later with a mode like random labeling. As more labels are applied, the neural models improve. 
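
Active-learning implementations vary, and Jaxon's is under the hood; one common strategy, shown here only as a sketch, is uncertainty sampling, which surfaces the examples the current model is least confident about:

    import numpy as np

    def pick_most_uncertain(probabilities):
        """probabilities: (n_examples, n_classes) model outputs.
        Returns the index of the example whose top-class probability
        is lowest, i.e. the one worth asking a human to label next."""
        confidence = probabilities.max(axis=1)
        return int(confidence.argmin())

    probs = np.array([[0.90, 0.10],   # confident
                      [0.55, 0.45],   # uncertain -> label this next
                      [0.80, 0.20]])
    print(pick_most_uncertain(probs))  # 1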

With Outlier Detection, examples that differ the most from the rest are identified and labeled first. While this can surface model errors, it can also surface the “corner cases” that would otherwise be lost to sampling from the more typical examples.

With Voting, labeled examples are shown to you at random, ready to be confirmed or rejected. On average, voting is three times faster than labeling.

With Active Voting, an unlabeled example is picked for the user with a Jaxon-suggested label attached. The user votes “Yes” if the label is correct, or “No” if not. Active voting is only possible in a dataset that already contains labels—either in the dataset itself, or first applied with a mode like random labeling. 

With Prompt User, Jaxon will show the user a label and ask them to create a completely novel example that matches the label. The selected labels are targeted by Jaxon to have the most impact on the resulting model. These new labeled examples are then added to the dataset.

Labeling

If you are Labeling, the current example will display here. You can pick a label from the list displayed on the right by clicking on it. This list of labels corresponds with the specification assigned to the current project. 

You can also label by beginning to type the name of the correct label into the text box. Then either click the label directly in the dropdown that appears, or press the return key.

Next, assign a confidence level to your label. The number of stars indicates how certain you are of this label. The default confidence level is two stars. 

After your label or labels have been selected, they will display above the example here. 

The label can be removed by clicking on this red X.

If you don’t know which label to select, click “Suggestion”, and Jaxon will suggest one for you. 

If none of the labels match the example, click “No Match” to give the example the label of “None”. Note that only labels present in the specification can be applied to examples in the labeling tab. 

Once you are satisfied with the labels you’ve assigned to the example, press the return key or the “Submit” button. 

You can go back to previous examples or skip ahead with these arrows.

Voting

If you are Voting, the current example will display here. The assigned label is shown in this grey bar above the example. 

The list of classes displayed above the voting box, here, corresponds to the specification assigned to the current project.

Decide if you agree or disagree with the label assigned to this example. Next, assign a confidence level to your assessment of the label. This indicates how certain you are that your vote is correct. If you agree with the assigned label, select Yes to continue. If you disagree with the assigned label, select No.

Go back to previous examples or skip ahead using these arrows.

Generative AI Demo with Modzy

BRAD:

So we’re gonna shift to an actual demonstration that leverages both Jaxon and Modzy to solve a real-world problem in contact center operations. Before I hand it over to Greg, I wanted to set the stage quickly for what the demo will include.

So when we think about contact center operations, there’s tons of data that the operators and agents deal with on a day-to-day basis: audio data, text data, data from both direct and indirect customer feedback and interactions. All this data really makes contact centers a ripe area for AI; things like speech transcription, human-in-the-loop learning, voice-of-customer NLP, and even generative AI.

So what we’re gonna show today is a fusion between the Jaxon and Modzy platforms to enable an end-to-end holistic solution, starting with data science and pipelining in Jaxon. Greg will show you how we’ll take an open source dataset for the purpose of this demo and leverage Jaxon’s platform for its synthetic data labeling and model training components. We’ll then export that model into Modzy, where, at the platform level, we’ll serve those models through scalable APIs, which enable a whole plethora of integrations. And what we’re gonna show today is an integration with Qlik that can build custom AI-powered dashboards and deliver those directly into the hands of the agents and the operators.

And then finally, and very importantly when we’re talking about generative AI in some of these use cases, there’s that monitoring component. So what we’ll be able to do is take predictions that are stored within Modzy, map those to the production data, and import them back into Jaxon. So that’s the general high-level flow.

We’ll reference this architecture a couple times throughout. So with that, I will pass it over to Greg to kick off the demo.

GREG:

All right, thanks Brad. All right, so I’m gonna take you guys through a little bit of how the sausage is made in terms of creating a model that you might wanna deploy to Modzy.

And one of the keys here is that we’re trying to make these models very easy to create and then to update and improve, and there are an awful lot of different tools in the Swiss Army knife, as it were, of tooling here, but I’m gonna take a particular focus on the bits of this that involve synthetic data and generative approaches—just to try to keep this contained to a few minutes. 

So we mentioned that we’re doing a customer support application, and more specifically what we’ve done is actually scraped up a bunch of public tweets sent to McDonald’s, and we’re trying to categorize these and try to figure out which ones fall into different categories.

And in particular, we’re gonna be worried about the issues, people reporting legitimate issues, rather than the rest; as we go through this, you’ll find in the data that there’s an awful lot of people shouting out their favorite food and that sort of thing.

Or bring back the McRib, bring back szechuan sauce, and all that good stuff. So that said, let’s see what we can do with some tweets that we scraped up. 

So before we get into the models, I wanna just show briefly that there are an awful lot of copies of what look like data sets. There are probably a couple of dozen datasets here—all of these are different derivations of the original data. Certainly we do things initially like splitting off a training and validation set. But when we get into the business of synthesizing new examples and synthesizing new labels and data sets, we want to keep all of these separate so that we can track some lineage and history and keep everything clean. 

So I just want you to see how much variety that there ends up being, all derived from a single original source dataset, in terms of different ways that we can train. And we’ll dig into the gory details of some of these in just a moment.

So, first, let’s take a look at just a clean, original unaugmented data set. And so here’s a simple example—I’ll clear this out from an earlier go round (I’ll show you how this works in just a moment). We have a raw tweet right here. I just want McDonald’s in the nap right now, so on. And I can pick from amongst the different examples that we have here.

I can take a look, and here are all the different classes that we might wanna label. So I can do this by hand, but there are a whole lot of options that I have in terms of how might I project—right now I’m talking about using either humans or a combination of humans with generative prompts to come up with different potential labels that I might want for this particular example. 

So when we think about using generative AI for this specific use case of preparing training data, there are two sides to it. We have the X’s and the Y’s, if you will. The X’s being the original data, or the features derived from the data, and the Y’s being our labeled annotations.

And there’s value to synthesizing either side of it, depending largely on the nature of your data. If you have a big pile of data—usually, the assumption is that data’s cheap, but it’s expensive to get annotated labels. You need humans in that. Then you get into the business of needing to synthesize pseudo labels, or weak labels, or a lot of different ways that we can get into doing that.

But on the example side, sometimes you either just don’t have examples, or they’re expensive to get, or you have them, but they have such heavy skew that it’s really difficult to get examples for some of the sparser areas of the dataset, which inevitably are the ones that you’re really looking for when you’re trying to sift needles out of haystacks of really common patterns.

So you get into different types of generative AI. And one of the themes I’ll come back to is—and I’ll show you some actual synthesized data in just a moment—is that accuracy is not always what you’re after, but rather usefulness. 

I have a couple of mantras I live by in my machine learning world.

One of them is that data and models are not separable. Models, you can think of as just views on a dataset. And then the other thing I like is the old saw about models, from good old George Box: all models are wrong, some of them are useful. So apply those things, and it brings you back to data.

All data is wrong, but some of it’s useful. You know, we worry so much about where are the errors in synthetic data, and we forget that the data itself and the human annotations are also error-laden, and that’s okay. We just need to make something useful as defined by how predictive our model is on the actual data that we want it to predict. And we’ll come back to that as well. 

So you can see that this example I pulled up has a proposed label of suggestions, and I can go with that. A lot of different means of creating things. I can also pull up an unlabeled example. I can ask for a new suggestion on this.

So what we’re doing now is actually going back and asking a generative AI of some sort. In this case, we just have a little BART model, which, for those who aren’t familiar, is pretty similar in spirit to GPT-3 and the big language models, but small and self-contained, and it can fit on a single desktop.

There’s, of course, trade-offs in terms of the accuracy and the capability, which brings it back to: is it useful? Do you need all the heft of GPT-3 in order to augment your data in a useful way? Eh, sometimes, but not necessarily. So just before I move on to more of the model building, I’m gonna show an example, let’s pick a good one here, of what some augmented training might look like.

So this example is a completely synthetic one. It would be based on an original, real example, just to get the theme and the style and even some of the content, but some of this is synthetically generated. And there are a lot of ways you can do it. You can go to things like the big language models and the big image models and the things that have captured our attention, but if you’re paying close attention and reading the fine print on some of the slides, there are also really simple techniques, doing things as simple as swapping words around and substituting synonyms. And what’s interesting is that when you start combining these seemingly simple things, the ones you wouldn’t even think of as being quote unquote AI, in clever ways together, you can get some really interesting emergent properties from them. And it all mixes up and causes the same effect. So deep learning and transformers are awesome and they’re changing the world, but they are not, and never have been, the only way to get there, and ultimately we’re after effect. So it’s a good mix of the new and the old, the complex and the simple, and what we’re really looking for is effect.

Just one more example of a way that we can use generative AI to create labels quickly. I’m showing you another means that we have here, where I can create a zero-shot classifier that’s not the thing we’re trying to train here.

So instead of worrying about the different themes in our food—is this a chicken sandwich? Is this an issue being reported? Is this a general rave? I can create—I’m just gonna separate out morning, noon, and night.

That’s my zero-shot classification. If it’s morning, I’m gonna label this as breakfast, and if it’s noon or night, it must be something else. And I can even test out a couple of things. I can say hash brown and—nobody let me do something [like] cheating here, like putting the words “in the morning”, but hash brown’s first thing, what will we get? Will we get breakfast? We do. It’s figured out that that’s more associated with morning than noon or night. And notice that I haven’t pre-trained any models here. This is just using a—this is the same BART model, just being crafted to a custom zero-shot. Let’s try something else. You know, burger for dinner. And we get an abstention, which in this case is our way of saying it wasn’t breakfast, it was something else, since I’m really just trying to identify breakfast versus not breakfast.

So one more angle of how to take the generative options. Technically, we’re first generating this quick classification model, and then we’re using that to generate a decision on a particular positive or negative pseudo label here. And that’s a great way to bring this in as not the end product, but rather a tool in a greater chain.

So, cutting to the chase now, we’re gonna take a look at actually creating models. And so here’s a list of different models that we’re able to generate with the system using different combinations, exploring the parameter space as well as some of those different dataset options that I showed you earlier. 

And just very quickly, I’ll show you; here’s a very first naive approach, and we get an okay DistilBERT model. This one is intentionally a baseline. Nothing fancy here. This is just our original human annotated small set, and it’s pretty good at some classes, lacking in some others.

And in particular, the issue class is the one that we’re most worried about. So we can see it’s not so great at that and a couple of others, and we wanna look at how we can improve that.

So we can do a couple of things. For example, here we’ve taken the exact same model architecture, DistilBERT. We’re starting from almost the same dataset, but what we’ve done is we’ve added in some of those emergent augmentations I mentioned. So this used only the simple ones, but it did some clever things during training to be able to capture some of those emergent behaviors, doing this on the fly in this particular case.

And you can see that we’re able to actually get quite a boost in terms of overall F score. And we’ve improved our three really troublesome classes, at least somewhat, which is what we’re most interested in. And we can, however, do even better. 

So one more I’ll show you here, and full disclosure, this one happens to jump up from DistilBERT to RoBERTa.

We are using a little bit bigger model, a few more parameters, but notice that these two that I’m highlighting, and I’ll zoom in on one, are now using not just what we did before, but also some of the augmented data vis-a-vis the actual full-on dataset synthesis action. So in this case, we’ve doubled the size of the dataset, half synthetic data, half real. And for this winner, where we’re up to 80% now, we actually have two times as much augmented data as original data. And as you can see, things are looking quite a bit better, even amongst our classes of concern.

So I’m going to stop the demo there. I could keep going on this stuff for a couple of hours, but I think I would lose the audience.

So I’m gonna turn it back over to you at this point, Brad.

BRAD: 

Thanks, Greg. So just to conclude the demo, a couple takeaways that we’d like to highlight here. The first is that synthetic data generation really enables more effective dataset augmentation and cuts down the data science prep.

It’s the idea that Scott and Greg talked about, that data scientists should be involved in the design of the pipeline, not the pipeline itself. The second is that solutions like Jaxon and Modzy make generative AI models, and models built using generative AI techniques, accessible beyond just research-based applications.

We saw that today through our kind of automatic integration: you take an exported model from Jaxon, import it to Modzy, and you can build all sorts of custom integrations that give these solutions directly to the end operators and the end analysts, to consume in software that they’re used to working with.

And then finally, you can’t really have any sort of end-to-end, holistic AI solution, whether it’s generative AI or not, without model monitoring. Robust model monitoring and that human-in-the-loop feedback are really essential to making sure that your models continuously maintain optimal performance.