Johnsnowlabs

Gain access to the johnsnowlabs ecosystem of enterprise NLP libraries, with over 21,000 enterprise NLP models in over 200 languages, through the open-source johnsnowlabs library. For all 24,000+ models, see the John Snow Labs Models Hub.

Installation and Setup

```bash
pip install johnsnowlabs
```

To [install enterprise features](https://nlp.johnsnowlabs.com/docs/en/jsl/install_licensed_quick), run:

```python
# For more details see https://nlp.johnsnowlabs.com/docs/en/jsl/install_licensed_quick
from johnsnowlabs import nlp

nlp.install()
```

You can embed your queries and documents with binaries optimized for cpu, gpu, apple_silicon, or aarch hardware. By default, cpu binaries are used. Once a session is started, you must restart your notebook to switch between GPU and CPU, or the change will not take effect.

Embed Query with CPU:

```python
from langchain_community.embeddings import JohnSnowLabsEmbeddings

document = "foo bar"
embedding = JohnSnowLabsEmbeddings("embed_sentence.bert")
output = embedding.embed_query(document)
```

Embed Query with GPU:

```python
document = "foo bar"
embedding = JohnSnowLabsEmbeddings("embed_sentence.bert", "gpu")
output = embedding.embed_query(document)
```

Embed Query with Apple Silicon (M1,M2,etc..):

```python
document = "foo bar"
embedding = JohnSnowLabsEmbeddings("embed_sentence.bert", "apple_silicon")
output = embedding.embed_query(document)
```

Embed Query with AARCH:

```python
document = "foo bar"
embedding = JohnSnowLabsEmbeddings("embed_sentence.bert", "aarch")
output = embedding.embed_query(document)
```

Embed Document with CPU:

```python
documents = ["foo bar", "bar foo"]
embedding = JohnSnowLabsEmbeddings("embed_sentence.bert")
output = embedding.embed_documents(documents)
```

Embed Document with GPU:

```python
documents = ["foo bar", "bar foo"]
embedding = JohnSnowLabsEmbeddings("embed_sentence.bert", "gpu")
output = embedding.embed_documents(documents)
```

Embed Document with Apple Silicon (M1,M2,etc..):


```python
documents = ["foo bar", "bar foo"]
embedding = JohnSnowLabsEmbeddings("embed_sentence.bert", "apple_silicon")
output = embedding.embed_documents(documents)
```

Embed Document with AARCH:


```python
documents = ["foo bar", "bar foo"]
embedding = JohnSnowLabsEmbeddings("embed_sentence.bert", "aarch")
output = embedding.embed_documents(documents)
```
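Both `embed_query` and `embed_documents` return plain lists of floats, so the results plug directly into standard vector math. Below is a minimal sketch of ranking documents against a query by cosine similarity; the vectors here are hypothetical stand-ins for what `embed_query` and `embed_documents` would return:

```python
import math


def cosine_similarity(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)


# Stand-in vectors; in practice these come from
# embedding.embed_query(...) and embedding.embed_documents(...).
query_vector = [0.1, 0.3, 0.5]
document_vectors = [[0.1, 0.3, 0.5], [0.5, 0.1, 0.0]]

scores = [cosine_similarity(query_vector, d) for d in document_vectors]
best = max(range(len(scores)), key=scores.__getitem__)
```

Here `best` is the index of the document vector closest to the query vector.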

Under the hood, models are loaded with nlp.load, and a Spark session is started with nlp.start().
