Python notebooks

Overview

Python notebooks provide a mechanism to interface between the Python scientific stack and data on Redivis.

As a general workflow, you'll use the redivis-python library to load data from the table(s) in your workflow, and then leverage Python and its ecosystem to perform your analyses. You can optionally create an output table from your notebook, which can then be used like any other table in your workflow.

The specific approaches to working with data in a notebook will be informed in part by the size and types of data that you are working with. Some common approaches are outlined below; consult the full redivis-python docs for comprehensive information.
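
As a minimal end-to-end sketch (each method used here is covered in detail below):

import redivis

# Load a source table into pandas, compute a summary, and
# materialize the result as an output table in the workflow
df = redivis.table("_source_").to_pandas_dataframe()
summary = df.describe().reset_index()
redivis.current_notebook().create_output_table(summary)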

Base image and dependencies

Python notebooks on Redivis are based off the jupyter/pytorch-notebook base image (version cuda12-python-3.12), which contains a variety of common scientific packages for Python running on Ubuntu 24.04. The latest version of the redivis-python library is also installed. To view all installed python packages, run pip list from within a running notebook.

To further customize your compute environment, you can specify various dependencies by clicking the Dependencies button at the top-right of your notebook. Here you will see three tabs: Packages, pre_install.sh, and post_install.sh.

Use the Packages tab to specify the python packages that you would like to install via pip. When you add a new package, it will be pinned to the latest version, but you can specify another version if preferred.

For more complex dependency management, you can also specify shell scripts under pre/post_install.sh. These scripts run before and after the package installation, respectively, and can execute arbitrary shell code. Common use cases include using apt to install system packages (apt-get update && apt-get install -y <package>), or using mamba to install from conda channels, which can be helpful for certain libraries (mamba install <package>).

For notebooks that reference restricted data, internet access is disabled while the notebook is running. This means that the dependencies interface is the only place from which you can install dependencies; running pip install ... within your notebook will fail.

Moreover, it is strongly recommended to always install your dependencies through the dependencies interface (regardless of whether your notebook has internet access), as this provides better reproducibility and documentation for future use.

Working with tabular data

When loading tabular data into your notebook, you'll typically bring it in as some sort of data frame. Specifically, you can load your data as:

  • A pandas.DataFrame

  • A dask.DataFrame

  • A polars.LazyFrame

  • A pyarrow.Table

  • A pyarrow.Dataset

The specific type of data frame is up to your preference, though there may be performance and memory implications that will matter for larger tables.

table = redivis.table("_source_")

pandas_df = table.to_pandas_dataframe(
  # max_results,      -> optional, max records to load
  # variables=list(), -> optional, a list of variables
  # ... consult the redivis-python docs for additional args
)

# other methods accept the same arguments, other than dtype_backend
dask_df = table.to_dask_dataframe()
polars_lf = table.to_polars_lazyframe()
arrow_table = table.to_arrow_table()
arrow_dataset = table.to_arrow_dataset()

# print the first 10 rows (works the same for the pandas, dask, and polars frames)
pandas_df.head(10)

Which data frame should I pick?

Each library has its own interface for analyzing data, and some may be better suited to your analytical needs. It is also easy to interchange between different data frame types, so you need not pick just one. But to offer some guidance:

  • Keep it standard: pandas

  • Parallel processing: dask

  • Fast new kid on the block: polars

  • Data doesn't fit in memory: pyarrow.Dataset, dask, polars
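
It's also straightforward to convert between these types. For example (a minimal sketch using standard pyarrow and polars conversion methods):

import polars as pl

arrow_table = redivis.table("_source_").to_arrow_table()

pandas_df = arrow_table.to_pandas()      # pyarrow -> pandas
polars_df = pl.from_arrow(arrow_table)   # pyarrow -> polars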

Working with geospatial data

If your table contains geospatial variable(s), you can take advantage of geopandas to utilize GIS functions and visualization. Calling to_geopandas_dataframe() on a Redivis table with a variable of the geography type will return an instance of a geopandas.GeoDataFrame, with that variable specified as the data frame's geometry variable.

If your table contains more than one geography variable, the first variable will be chosen as the geometry. You can explicitly specify the geography variable via the geography_variable parameter.

If you'd prefer to work with your geospatial data as a string, you can use any of the other table.to_* methods. In these cases, the geography variable will be represented as a WKT-encoded string.

table = redivis.table("_source_") # a table with a geography variable

geo_df = table.to_geopandas_dataframe(
  # geography_variable -> optional, str. If not specified, will be first geo var in the table
)
geo_df.explore() # visualize it!
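
If you load the same table with to_pandas_dataframe() instead, you can parse the WKT strings yourself. A sketch using shapely (the variable name geom here is hypothetical):

from shapely import from_wkt

df = table.to_pandas_dataframe()
geometries = df["geom"].apply(from_wkt)  # WKT string -> shapely geometry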

Working with larger tables

Typically, tabular data is loaded into memory for analysis. This is often the most performant option, but if your data exceeds available memory, you'll need to consider other approaches for working with data at this scale.

"Too big for memory" will vary significantly based on the types of analyses you'll be doing, but as a very rough rule of thumb, you should consider these options once your table(s) exceed 1/10th of the total available memory.

Often, the best solution is to limit the amount of data that is coming into your notebook. To do so, you can:

  • Leverage transforms to first filter / aggregate your data.

  • Select only specific variables from a table by passing the variables=list(str) argument.

  • Pre-process data as it is loaded into your notebook, via the batch_preprocessor argument.

  • Pre-filter data via a SQL query from within your notebook, via the redivis.query() method.
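
For example, a sketch combining the variables and batch_preprocessor arguments (this assumes the preprocessor receives and returns a pyarrow.RecordBatch; consult the redivis-python docs for the exact contract):

import pyarrow.compute as pc

# Load just two variables, dropping unneeded rows as each batch is read
df = redivis.table("_source_").to_pandas_dataframe(
    variables=["grade", "score"],
    batch_preprocessor=lambda batch: batch.filter(
        pc.equal(batch.column("grade"), 9)
    ),
)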

If your data is still pushing memory limits, there are two primary options. You can either store data on disk, or process data as a stream:

Storing data on disk

Hard disks are often much larger than available memory, and by loading data first to disk, you can significantly increase the amount of data available in the notebook. Moreover, modern columnar data formats support partitioning and predicate pushdown, allowing for highly efficient analyses on these disk-backed dataframes.

The general approach for these disk-backed dataframes is to lazily evaluate our computation, only pulling content into memory after all computations have been applied, and ideally after the data has been reduced. The methods to_dask_dataframe(), to_polars_lazyframe(), and to_arrow_dataset() all return a disk-backed dataframe:

# Option 1: dask
dask_df = redivis.table("test_scores").to_dask_dataframe()

dask_df = dask_df[dask_df.grade == 9]              # Select a subsection
result = dask_df.groupby("teacher").score.mean()   # Reduce to a smaller size
result = result.compute()                          # Convert to a pandas dataframe

# Option 2: polars
import polars as pl

polars_lf = redivis.table("test_scores").to_polars_lazyframe()

result = (
    polars_lf
    .filter(pl.col("grade") == 10)   # Select a subsection
    .group_by("teacher")             # Reduce to a smaller size
    .mean()
    .collect()                       # Convert to a polars.DataFrame
)

# Option 3: pyarrow
import pyarrow.compute as pc

arrow_ds = redivis.table("test_scores").to_arrow_dataset()
filtered_ds = arrow_ds.filter(pc.field("grade") == 10)   # Lazily select a subsection

For more detail, see the dask groupby, polars group_by, and pyarrow.Dataset.filter documentation.

All three of these libraries also support various forms of batched processing, which allows you to process your data similarly to the streaming methodology outlined below. While it will generally be faster to just process the stream directly, it can be helpful to first load a table to disk as you experiment with a streaming approach:

# Batched processing with dask (process_record is a user-defined function)
dask_df = redivis.table("_source_").to_dask_dataframe()
dask_df.apply(process_record, axis=1)

# Batched processing with polars (process_record_batch is a user-defined function)
polars_lf = redivis.table("_source_").to_polars_lazyframe()
polars_lf.map_batches(process_record_batch)

# Batched processing with pyarrow
arrow_ds = redivis.table("_source_").to_arrow_dataset()
for batch in arrow_ds.to_batches():
    process_record_batch(batch)

See the dask.DataFrame.apply, polars.LazyFrame.map_batches, and pyarrow.Dataset.to_batches documentation for more detail.

Streaming data

By streaming data into your notebook, you can process data in batches of rows, avoiding the need to load more than a small chunk of data into memory at a time. This approach is the most scalable, since it won't be limited by available memory or disk. For this, we can use the Table.to_arrow_batch_iterator() method:

import pyarrow.compute as pc

batch_iterator = redivis.table("test_scores").to_arrow_batch_iterator()

count = 0
total = 0
for batch in batch_iterator:
    # batch is an instance of pyarrow.RecordBatch -> https://arrow.apache.org/docs/python/generated/pyarrow.RecordBatch.html
    # Call batch.to_pandas() to convert to a pandas dataframe
    scores = batch.column("scores")
    count += len(scores)
    total += pc.sum(scores).as_py()   # sum the batch's scores with pyarrow compute

print(f"The average of all test scores was {total/count}")

Working with unstructured data files

Unstructured data files on Redivis are represented by file index tables, or specifically, tables that contain a file_id variable. If you have file index tables in your workflow, you can analyze the files represented in those tables within your notebook. Similarly to working with tabular data, we can either download all files, or iteratively process them:

# e.g., assume we have a source table representing thousands of .png files
images_table = redivis.table("_source_")

# download all the images to a local directory for further processing
images_table.download_files(path="/path/to/dir")

# alternatively, process the images iteratively
# f is an instance of redivis.File(). See API docs for all available methods
for f in images_table.list_files():
    # read in the file to memory
    file_bytes = f.read() 
    
    # for larger files, you might want to process the data as a stream,
    #   similar to opening a file on local disk
    io_stream = f.stream()

Creating output tables

Redivis notebooks offer the ability to materialize notebook outputs as a new table node in your workflow. This table can then be processed by transforms, read into other notebooks, exported, or even re-imported into a dataset.

To create an output table, use the redivis.current_notebook().create_output_table() method, passing in any of the following as the first argument:

  • A pandas.DataFrame

  • A dask.DataFrame

  • A polars.DataFrame

  • A polars.LazyFrame

  • A pyarrow.Table

  • A pyarrow.Dataset

  • A string file path to any parquet file

Redivis will automatically handle any type inference in generating the output table, mapping your data type to the appropriate Redivis type.

If an output table for the notebook already exists, it will be overwritten by default. You can pass append=True to append to, rather than overwrite, the table. For the append to succeed, all variables in the appended table that are also present in the existing table must have the same type.

# Read table into a pandas dataframe
df = redivis.table('_source_').to_pandas_dataframe()

# Perform various data manipulation actions
df2 = df.apply(some_processing_fn)

# Create an output table with the contents of this dataframe
redivis.current_notebook().create_output_table(df2)

# We can also append content to the output table, to process in batches
df3 = df.apply(some_other_fn)
redivis.current_notebook().create_output_table(df3, append=True)

Storing files

As you perform your analysis, you may generate files that are stored on the notebook's hard disk. There are two locations that you should write files to: /out for persistent storage, and /scratch for temporary storage.

Any files written to persistent storage will be available after the notebook is stopped, and will be restored to the same state when the notebook is run again. In contrast, any files written to temporary storage will only exist for the duration of the current notebook session.

# Persist files in /out
df.to_csv("/out/data.csv")

# Store temporary files in /scratch
df.to_csv("/scratch/temp_data.csv")
