📈 Snorkel Intro Tutorial: Data Augmentation
In this tutorial, we will walk through the process of using transformation functions (TFs) to perform data augmentation.
Like the labeling tutorial, our goal is to train a classifier that labels YouTube comments as SPAM or HAM (not spam).
In the previous tutorial,
we demonstrated how to label training sets programmatically with Snorkel.
In this tutorial, we’ll assume that step has already been done, and start with labeled training data,
which we’ll aim to augment using transformation functions.
Data augmentation is a popular technique for increasing the size of labeled training sets by applying class-preserving transformations to create copies of labeled data points. In the image domain, it is a crucial factor in almost every state-of-the-art result today and is quickly gaining popularity in text-based applications. Snorkel models the data augmentation process by applying user-defined transformation functions (TFs) in sequence. You can learn more about data augmentation in this blog post about our NeurIPS 2017 work on automatically learned data augmentation.
The tutorial is divided into four parts:
- Loading Data: We load a YouTube comments dataset.
- Writing Transformation Functions: We write Transformation Functions (TFs) that can be applied to training data points to generate new training data points.
- Applying Transformation Functions to Augment Our Dataset: We apply a sequence of TFs to each training data point, using a random policy, to generate an augmented training set.
- Training a Model: We use the augmented training set to train an LSTM model for classifying new comments as SPAM or HAM.
1. Loading Data
We load the Kaggle dataset and create Pandas DataFrame objects for the train and test sets.
The two main columns in the DataFrames are:
- text: Raw text content of the comment
- label: Whether the comment is SPAM (1) or HAM (0)
For more details, check out the labeling tutorial.
from utils import load_spam_dataset
df_train, df_test = load_spam_dataset(load_train_labels=True)
# We pull out the label vectors for ease of use later
Y_train = df_train["label"].values
Y_test = df_test["label"].values
df_train.head()
|   | author | date | text | label | video |
|---|---|---|---|---|---|
| 0 | Alessandro leite | 2014-11-05T22:21:36 | pls http://www10.vakinha.com.br/VaquinhaE.aspx... | 1 | 1 |
| 1 | Salim Tayara | 2014-11-02T14:33:30 | if your like drones, plz subscribe to Kamal Ta... | 1 | 1 |
| 2 | Phuc Ly | 2014-01-20T15:27:47 | go here to check the views :3 | 0 | 1 |
| 3 | DropShotSk8r | 2014-01-19T04:27:18 | Came here to check the views, goodbye. | 0 | 1 |
| 4 | css403 | 2014-11-07T14:25:48 | i am 2,126,492,636 viewer :D | 0 | 1 |
2. Writing Transformation Functions (TFs)
Transformation functions are functions that can be applied to a training data point to create another valid training data point of the same class. For example, for image classification problems, it is common to rotate or crop images in the training data to create new training inputs. Transformation functions should be atomic, e.g. a small rotation of an image or changing a single word in a sentence. We then compose multiple transformation functions when applying them to training data points.
Common ways to augment text include replacing words with their synonyms, or replacing named entities with other entities. More info can be found here or here.
Our basic modeling assumption is that applying these operations to a comment generally shouldn’t change whether it is SPAM or not.
Transformation functions in Snorkel are created with the transformation_function decorator, which wraps a function that takes in a single data point and returns a transformed version of the data point. If no transformation is possible, a TF can return None or the original data point. If all the TFs applied to a data point return None, the data point won’t be included in the augmented dataset when we apply our TFs below.
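To make that contract concrete, here’s a minimal, hypothetical TF (not part of this tutorial’s TF set) that either returns a modified data point or None:

from snorkel.augmentation import transformation_function

@transformation_function()
def truncate_comment(x):
    # Hypothetical example: shorten very long comments.
    # Return the transformed data point, or None if no transformation applies.
    if len(x.text) > 100:
        x.text = x.text[:100]
        return x
    return None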
Just like the labeling_function decorator, the transformation_function decorator accepts a pre argument for Preprocessor objects. Here, we’ll use a SpacyPreprocessor.
from snorkel.preprocess.nlp import SpacyPreprocessor
spacy = SpacyPreprocessor(text_field="text", doc_field="doc", memoize=True)
import names
import numpy as np

from snorkel.augmentation import transformation_function
# Pregenerate some random person names to replace existing ones with
# for the transformation strategies below
replacement_names = [names.get_full_name() for _ in range(50)]
# Replace a random named entity with a different entity of the same type.
@transformation_function(pre=[spacy])
def change_person(x):
person_names = [ent.text for ent in x.doc.ents if ent.label_ == "PERSON"]
# If there is at least one person name, replace a random one. Else return None.
if person_names:
name_to_replace = np.random.choice(person_names)
replacement_name = np.random.choice(replacement_names)
x.text = x.text.replace(name_to_replace, replacement_name)
return x
# Swap two adjectives at random.
@transformation_function(pre=[spacy])
def swap_adjectives(x):
adjective_idxs = [i for i, token in enumerate(x.doc) if token.pos_ == "ADJ"]
# Check that there are at least two adjectives to swap.
if len(adjective_idxs) >= 2:
idx1, idx2 = sorted(np.random.choice(adjective_idxs, 2, replace=False))
# Swap tokens in positions idx1 and idx2.
x.text = " ".join(
[
x.doc[:idx1].text,
x.doc[idx2].text,
x.doc[1 + idx1 : idx2].text,
x.doc[idx1].text,
x.doc[1 + idx2 :].text,
]
)
return x
We add some transformation functions that use wordnet
from NLTK to replace different parts of speech with their synonyms.
import nltk
from nltk.corpus import wordnet as wn
nltk.download("wordnet")
def get_synonym(word, pos=None):
"""Get synonym for word given its part-of-speech (pos)."""
synsets = wn.synsets(word, pos=pos)
# Return None if wordnet has no synsets (synonym sets) for this word and pos.
if synsets:
words = [lemma.name() for lemma in synsets[0].lemmas()]
if words[0].lower() != word.lower(): # Skip if synonym is same as word.
# Multi word synonyms in wordnet use '_' as a separator e.g. reckon_with. Replace it with space.
return words[0].replace("_", " ")
def replace_token(spacy_doc, idx, replacement):
"""Replace token in position idx with replacement."""
return " ".join([spacy_doc[:idx].text, replacement, spacy_doc[1 + idx :].text])
@transformation_function(pre=[spacy])
def replace_verb_with_synonym(x):
# Get indices of verb tokens in sentence.
verb_idxs = [i for i, token in enumerate(x.doc) if token.pos_ == "VERB"]
if verb_idxs:
# Pick random verb idx to replace.
idx = np.random.choice(verb_idxs)
synonym = get_synonym(x.doc[idx].text, pos="v")
# If there's a valid verb synonym, replace it. Otherwise, return None.
if synonym:
x.text = replace_token(x.doc, idx, synonym)
return x
@transformation_function(pre=[spacy])
def replace_noun_with_synonym(x):
# Get indices of noun tokens in sentence.
noun_idxs = [i for i, token in enumerate(x.doc) if token.pos_ == "NOUN"]
if noun_idxs:
# Pick random noun idx to replace.
idx = np.random.choice(noun_idxs)
synonym = get_synonym(x.doc[idx].text, pos="n")
# If there's a valid noun synonym, replace it. Otherwise, return None.
if synonym:
x.text = replace_token(x.doc, idx, synonym)
return x
@transformation_function(pre=[spacy])
def replace_adjective_with_synonym(x):
# Get indices of adjective tokens in sentence.
adjective_idxs = [i for i, token in enumerate(x.doc) if token.pos_ == "ADJ"]
if adjective_idxs:
# Pick random adjective idx to replace.
idx = np.random.choice(adjective_idxs)
synonym = get_synonym(x.doc[idx].text, pos="a")
# If there's a valid adjective synonym, replace it. Otherwise, return None.
if synonym:
x.text = replace_token(x.doc, idx, synonym)
return x
tfs = [
change_person,
swap_adjectives,
replace_verb_with_synonym,
replace_noun_with_synonym,
replace_adjective_with_synonym,
]
Let’s check out a few examples of transformed data points to see what our TFs are doing.
from utils import preview_tfs
preview_tfs(df_train, tfs)
|   | TF Name | Original Text | Transformed Text |
|---|---|---|---|
| 0 | change_person | Check out Berzerk video on my channel ! :D | Check out Jennifer Selby video on my channel ! :D |
| 1 | swap_adjectives | hey guys look im aware im spamming and it piss... | hey guys look im aware im spamming and it piss... |
| 2 | replace_verb_with_synonym | "eye of the tiger" "i am the champion" seems l... | "eye of the tiger" "i be the champion" seems l... |
| 3 | replace_noun_with_synonym | Hey, check out my new website!! This site is a... | Hey, check out my new web site !! This site is... |
| 4 | replace_adjective_with_synonym | I started hating Katy Perry after finding out ... | I started hating Katy Perry after finding out ... |
We notice a couple of things about the TFs.
- Sometimes they make trivial changes ("website" to "web site" for replace_noun_with_synonym). This can still be helpful for training our model, because it teaches the model to be invariant to such small changes.
- Sometimes they introduce incorrect grammar to the sentence (e.g. swap_adjectives swapping "young" and "more" above).
The TFs are expected to be heuristic strategies that indeed preserve the class most of the time, but don’t need to be perfect. This is especially true when using automated data augmentation techniques which can learn to avoid particularly corrupted data points. As we’ll see below, Snorkel is compatible with such learned augmentation policies.
3. Applying Transformation Functions
We’ll first define a Policy to determine what sequence of TFs to apply to each data point. We’ll start with a RandomPolicy that samples sequence_length=2 TFs to apply uniformly at random per data point. The n_per_original argument determines how many augmented data points to generate per original data point.
from snorkel.augmentation import RandomPolicy
random_policy = RandomPolicy(
len(tfs), sequence_length=2, n_per_original=2, keep_original=True
)
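Conceptually, the sampling works roughly like the sketch below (an illustrative sketch of the scheme, not Snorkel’s actual internals): for each original data point, draw n_per_original sequences of sequence_length TF indices uniformly at random, then apply the corresponding TFs in order.

# Illustrative sketch of uniform policy sampling (not the library's code).
def sample_uniform_sequences(n_tfs, sequence_length=2, n_per_original=2):
    return [
        list(np.random.choice(n_tfs, size=sequence_length))
        for _ in range(n_per_original)
    ]

sample_uniform_sequences(len(tfs))  # e.g. [[3, 0], [4, 4]]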
In some cases, we can do better than uniform random sampling. We might have domain knowledge that some TFs should be applied more frequently than others, or we might have trained an automated data augmentation model that learned a sampling distribution for the TFs. Snorkel supports this use case with a MeanFieldPolicy, which allows you to specify a sampling distribution for the TFs. Here we give higher probabilities to the replace_[X]_with_synonym TFs, since those provide more information to the model.
from snorkel.augmentation import MeanFieldPolicy
mean_field_policy = MeanFieldPolicy(
len(tfs),
sequence_length=2,
n_per_original=2,
keep_original=True,
p=[0.05, 0.05, 0.3, 0.3, 0.3],
)
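Conceptually, the only change from the uniform policy is the sampling distribution: each TF index is now drawn according to p (again an illustrative sketch, not the library’s internals).

# Draw one length-2 sequence of TF indices according to p.
list(np.random.choice(len(tfs), size=2, p=[0.05, 0.05, 0.3, 0.3, 0.3]))  # e.g. [2, 4]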
To apply one or more TFs that we’ve written to a collection of data points according to our policy, we use a PandasTFApplier, because our data points are represented with a Pandas DataFrame.
from snorkel.augmentation import PandasTFApplier
tf_applier = PandasTFApplier(tfs, mean_field_policy)
df_train_augmented = tf_applier.apply(df_train)
Y_train_augmented = df_train_augmented["label"].values
print(f"Original training set size: {len(df_train)}")
print(f"Augmented training set size: {len(df_train_augmented)}")
Original training set size: 1586
Augmented training set size: 2486
We have grown our training set by over 50% using TFs! Note that despite n_per_original being set to 2, the dataset may not exactly triple in size (with keep_original=True, each original data point yields at most the original plus two augmented copies), because sometimes TFs return None instead of a new data point (e.g. change_person when applied to a sentence with no persons).
If you prefer to have exact proportions for your dataset, you can have TFs that can’t perform a valid transformation return the original data point rather than None (as they do here).
4. Training a Model
Our final step is to use the augmented data to train a model. We train an LSTM (Long Short-Term Memory) model, a standard architecture for text processing tasks.
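For reference, the get_keras_lstm helper from the tutorial’s utils module builds a small Keras model roughly along these lines. This is a hedged sketch of a typical setup (the layer sizes are made-up defaults), not necessarily the exact helper:

import tensorflow as tf

def get_keras_lstm_sketch(num_buckets, embed_dim=16, rnn_state_size=64):
    # Hypothetical stand-in for utils.get_keras_lstm: embed hashed token ids,
    # run an LSTM over the sequence, and output P(SPAM) with a sigmoid head.
    model = tf.keras.Sequential(
        [
            tf.keras.layers.Embedding(num_buckets, embed_dim),
            tf.keras.layers.LSTM(rnn_state_size),
            tf.keras.layers.Dense(1, activation="sigmoid"),
        ]
    )
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model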
Now we’ll train our LSTM on both the original and augmented datasets to compare performance.
from utils import featurize_df_tokens, get_keras_lstm
X_train = featurize_df_tokens(df_train)
X_train_augmented = featurize_df_tokens(df_train_augmented)
X_test = featurize_df_tokens(df_test)
def train_and_test(X_train, Y_train, X_test=X_test, Y_test=Y_test, num_buckets=30000):
# Define a vanilla LSTM model with Keras
lstm_model = get_keras_lstm(num_buckets)
lstm_model.fit(X_train, Y_train, epochs=5, verbose=0)
preds_test = lstm_model.predict(X_test)[:, 0] > 0.5
return (preds_test == Y_test).mean()
acc_augmented = train_and_test(X_train_augmented, Y_train_augmented)
acc_original = train_and_test(X_train, Y_train)
print(f"Test Accuracy (original training data): {100 * acc_original:.1f}%")
print(f"Test Accuracy (augmented training data): {100 * acc_augmented:.1f}%")
Test Accuracy (original training data): 86.0%
Test Accuracy (augmented training data): 91.6%
So using the augmented dataset indeed improved our model! There is a lot more you can do with data augmentation, so try a few ideas out on your own!