R notebooks
Overview
R notebooks provide a mechanism to interface between the R scientific stack and data on Redivis.
As a general workflow, you'll use the redivis-r library to load data from the table(s) in your workflow, and then leverage R and its ecosystem to perform your analyses. You can optionally create an output table from your notebook, which can then be used like any other table in your workflow.
The specific approaches to working with data in a notebook will be informed in part by the size and types of data that you are working with. Some common approaches are outlined below, and you can consult the full redivis-r docs for comprehensive information.
Base image and dependencies
R notebooks on Redivis are based on the jupyter/r-notebook base image (version r-4.4.1), which runs Ubuntu 24.04 and contains a variety of common scientific packages for R. The latest version of the redivis-r library is also installed. To view all installed R packages, execute the following:
# List every installed package alongside its version
pkgs <- installed.packages()[, "Version"]
tibble::tibble(
  Package = names(pkgs),
  Version = unname(pkgs)
)
To further customize your compute environment, you can specify various dependencies by clicking the Dependencies button at the top-right of your notebook. Here you will see three tabs: Packages, pre_install.sh, and post_install.sh.
Use the Packages tab to specify the R packages that you would like to install. When adding a new package, it will be pinned to its latest version, but you can specify another version if preferred. If a given package and version exists on conda, it will be installed from there; otherwise the package will be installed via R's devtools::install().
For more complex dependency management, you can also specify shell scripts under pre_install.sh and post_install.sh. These scripts are executed before and after the package installation, respectively, and can run arbitrary code in the shell. Common use cases include using apt to install system packages (apt-get update && apt-get install -y <package>), using mamba to install from conda, or executing R code to install additional dependencies. To execute R code in the shell, run:
R -e '
# R code here, e.g.:
library(devtools)
devtools::install_github("some-package")
'
Working with tabular data
When loading tabular data into your notebook, you'll typically bring it in as some sort of data frame. Specifically, you can load your data as a tidyverse tibble, a data.table, a base R data.frame, an Arrow Table, or an Arrow Dataset.
The specific type of data frame is up to your preference, though there may be performance and memory implications that matter for larger tables.
table <- redivis$table("_source_")

tidyverse_tibble <- table$to_tibble(
  # max_results = ...,      -> optional, max records to load
  # variables = list(...),  -> optional, a list of variables to load
  # consult the redivis-r docs for additional args
)

# The other methods accept the same arguments, other than dtype_backend
data_table    <- table$to_data_table()
data_frame    <- table$to_data_frame()
arrow_table   <- table$to_arrow_table()
arrow_dataset <- table$to_arrow_dataset()

# Print the first 10 rows of any of these data frames, e.g.:
head(tidyverse_tibble, 10)
Working with geospatial data
If your table contains geospatial variable(s), you can take advantage of the sf (simple features) package to utilize GIS functions and visualization. By default, calling Table$to_sf_tibble() on a Redivis table with a variable of the geography type will return an sf tibble, with that variable set as the corresponding geometry column.
If your table contains more than one geography variable, the first one will be chosen as the geometry column. You can explicitly specify the geography variable via the geography_variable parameter.
If you'd prefer to work with your geospatial data as a string, you can use any of the other table$to_* methods. In these cases, the geography variable will be represented as a WKT-encoded string.
table <- redivis$table("_source_") # a table with a geography variable

sf_tbl <- table$to_sf_tibble(
  # geography_variable -> optional, str. If not specified, the first geography variable in the table is used
)

plot(sf_tbl) # visualize it!
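For example, if you load the same table with one of the other methods, the geography variable comes through as WKT text that you can convert yourself with sf. This is a minimal sketch: the column name geom and the CRS are assumptions, so substitute the actual variable name and coordinate reference system for your data.
library(sf)

# Load the table as a regular tibble; the geography variable arrives as a WKT string
tbl <- redivis$table("_source_")$to_tibble()

# Convert the WKT column (hypothetical name "geom") into an sf geometry column.
# EPSG:4326 is assumed; use the CRS your data was actually stored in.
sf_tbl <- st_as_sf(tbl, wkt = "geom", crs = 4326)

plot(sf_tbl)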
Working with larger tables
Typically, tabular data is loaded into memory for analysis. This is often the most performant option, but if your data exceeds available memory, you'll need to consider other approaches for working with data at this scale.
Often, the best solution is to limit the amount of data that is coming into your notebook. To do so, you can:
Leverage transforms to first filter / aggregate your data
Select only specific variables from a table by passing the
variables=list(str)
argument.Pre-filter data via a SQL query from within your notebook, via the redivis$query() method.
Pre-process data as it is loaded into your notebook, via the
batch_preprocessor
argument.
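For instance, the sketch below loads only two variables from a table and uses a SQL query to pre-filter rows before anything reaches the notebook. The table name, variable names, and query are hypothetical, and it is assumed that the query object supports the same to_tibble() loader as tables; consult the redivis-r docs for the exact query syntax and table references available in your workflow.
# Load only the variables you need (names here are hypothetical)
scores <- redivis$table("test_scores")$to_tibble(
  variables = list("teacher", "score")
)

# Pre-filter rows with SQL before loading them into memory.
# The table reference in FROM depends on how the table is named in your workflow.
filtered <- redivis$query("
  SELECT teacher, score
  FROM test_scores
  WHERE grade = 9
")$to_tibble()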
If your data is still pushing memory limits, there are two primary options. You can either store data on disk, or process data as a stream:
Storing data on disk
Hard disks are often much larger than available memory, and by loading data first to disk, you can significantly increase the amount of data available in the notebook. Moreover, modern columnar data formats support partitioning and predicate pushdown, allowing us to perform highly performant analyses on these disk-backed dataframes.
The general approach for these disk-backed dataframes is to lazily evaluate our computation, only pulling content into memory after all computations have been applied and, ideally, the data has been reduced. The to_arrow_dataset() method returns a disk-backed dataframe that supports most dplyr methods:
library(dplyr)

arrow_dataset <- redivis$table("test_scores")$to_arrow_dataset()

arrow_dataset %>%
  filter(grade == 9) %>%                   # Select a subset of rows
  group_by(teacher) %>%                    # Reduce to a smaller size
  summarize(avg_score = mean(score)) %>%
  collect()                                # Data is only loaded into memory at this point, as a tibble
Arrow datasets also support batched processing, which allows you to process your data similar to the streaming methodology outlined below. While it will generally be faster to just process the stream directly, it can be helpful to first load a table to disk as you experiment with a streaming approach:
library(arrow)

arrow_dataset <- redivis$table("test_scores")$to_arrow_dataset()

# Convert the dataset into a RecordBatchReader and iterate over its batches
reader <- as_record_batch_reader(arrow_dataset)
while (!is.null(batch <- reader$read_next_batch())) {
  # process each arrow RecordBatch
}
arrow.RecordBatch documentation >
Streaming data
By streaming data into your notebook, you can process data in batches of rows, avoiding the need to load more than a small chunk of data into memory at a time. This approach is the most scalable, since it isn't limited by available memory or disk. To do so, we can use the Table$to_arrow_batch_reader() method:
reader <- redivis$table("test_scores")$to_arrow_batch_reader()

count <- 0
total <- 0
while (!is.null(batch <- reader$read_next_batch())){
  # batch is an arrow RecordBatch -> https://arrow.apache.org/docs/r/reference/record_batch.html
  scores <- as.vector(batch$scores)  # convert the column to a plain R vector
  count <- count + length(scores)
  total <- total + sum(scores)
}

print(stringr::str_interp("The average of all test scores was ${total / count}"))
Working with unstructured data files
Unstructured data files on Redivis are represented by file index tables, or, more specifically, tables that contain a file_id variable. If you have file index tables in your workflow, you can analyze the files represented in those tables within your notebook. Similarly to working with tabular data, we can either download all files, or iteratively process them:
# e.g., assume we have a source table representing thousands of .png files
images_table <- redivis$table("_source_")

# Download all the images to a local directory for further processing
images_table$download_files(path="/path/to/dir")

# Alternatively, process the images iteratively.
# f is an instance of redivis.File(). See the API docs for all available methods.
for (f in images_table$list_files()){
  # Read the file into memory
  file_bytes <- f$read()

  # For larger files, you might want to process the data as a stream,
  # similar to opening the file on local disk
  f$stream(stream_callback_fn)
}
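As a sketch of the streaming approach, the callback below simply tallies how many bytes it receives. The callback signature used here (a function invoked with successive raw chunks) is an assumption; consult the redivis-r API docs for the definitive interface.
# Hypothetical streaming callback (assumption: it is called with successive raw chunks)
total_bytes <- 0
stream_callback_fn <- function(chunk) {
  # Accumulate the number of bytes seen so far
  total_bytes <<- total_bytes + length(chunk)
}

# Stream each file through the callback instead of reading it all at once
for (f in images_table$list_files()) {
  f$stream(stream_callback_fn)
}

print(total_bytes)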
Creating output tables
Redivis notebooks offer the ability to materialize notebook outputs as a new table node in your workflow. This table can then be processed by transforms, read into other notebooks, exported, or even re-imported into a dataset.
To create an output table, use the redivis$current_notebook()$create_output_table() method, passing in any of the following as the first argument:
A data frame (such as the tibble in the example below)
A string file path to any parquet file
Redivis will automatically handle type inference when generating the output table, mapping your data types to the appropriate Redivis types.
If an output table for the notebook already exists, by default it will be overwritten. You can pass append=TRUE to append to, rather than overwrite, the table. In order for the append to succeed, all variables in the appended data that are also present in the existing table must have the same type.
library(dplyr)

# Read the table into a tibble
tbl <- redivis$table('_source_')$to_tibble()

# Perform various data manipulation actions
tbl2 <- tbl %>% mutate(...)

# Create an output table with the contents of this data frame
redivis$current_notebook()$create_output_table(tbl2)

# We can also append content to the output table, to process in batches
tbl3 <- tbl %>% filter(...)
redivis$current_notebook()$create_output_table(tbl3, append=TRUE)
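You can also write results to a parquet file and pass its path. A minimal sketch, assuming the arrow package is available and using /scratch for the intermediate file:
library(arrow)

# Write the data frame to a temporary parquet file on scratch disk
arrow::write_parquet(tbl2, "/scratch/output.parquet")

# Create the output table from the parquet file's path
redivis$current_notebook()$create_output_table("/scratch/output.parquet")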
Storing files
As you perform your analysis, you may generate files that are stored on the notebook's hard disk. There are two locations that you should write files to: /out for persistent storage, and /scratch for temporary storage.
Any files written to persistent storage will be available when the notebook is stopped, and will be restored to the same state when the notebook is run again. By contrast, any files written to temporary storage will only exist for the duration of the current notebook session.
# Persist files in /out
write.csv(df, "/out/data.csv", na="")
# Store temporary files in /scratch
write.csv(df, "/scratch/temp_data.csv", na="")
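Generated figures can be persisted the same way. A minimal sketch, assuming ggplot2 is available in the environment:
library(ggplot2)

# Build a simple plot from a built-in dataset
p <- ggplot(mtcars, aes(x = wt, y = mpg)) +
  geom_point()

# Persist the figure so it is still available the next time the notebook runs
ggsave("/out/mpg_vs_weight.png", plot = p, width = 6, height = 4)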