Community curators are granted access to a dev environment for testing and developing data models. A member of Flipside's analytics team will need to grant you access, so please ask in the # 🌲 | community-curation channel on Discord with something along the lines of:
> Hi, I’m interested in doing data curation for Flipside. Could you give me Snowflake access please? I’d like my username to be:
Access to Snowflake is granted for the sole purpose of community curation and testing your models. This password is not to be shared with anyone. If you know someone who would like to contribute as well, we will credential them separately. If you would like to work with Flipside data in a Snowflake environment, please see the section on Data Shares and reach out separately.
If you are unfamiliar with dbt, we suggest creating a free account with dbt Cloud. dbt Labs has built an IDE for developing dbt models. Once the environment is set up with the proper credentials, connect to a fork of the model repository to begin editing or building your own. The cloud environment includes the option to preview the compiled SQL models so you can see output as you work. Additionally, the command line for running dbt includes built-in autocomplete for common dbt commands.
Note: if you are using dbt Cloud, you will need to fork the main repository and link your dbt Cloud environment to the fork.
Once set up, clone a copy of the repository of choice to your machine and checkout a branch to begin making your changes. Branch names should follow the convention:

```shell
git checkout -b community/my-new-model
```
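If you want to sanity-check your branch name before pushing, a quick pattern match works; this check is our own illustration (the `community/` prefix comes from the convention above, and is the only part of Flipside tooling assumed here):

```shell
# Verify a branch name follows the community/<model-name> convention.
branch="community/my-new-model"

case "$branch" in
  community/*) echo "ok: ${branch}" ;;                       # prints: ok: community/my-new-model
  *)           echo "rename: branches should start with community/" >&2 ;;
esac
```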
We have included a Dockerfile in eligible repositories to handle the installation of dbt on your behalf.
3. Copy the pre-filled environment details into a `.env` file with your credentials. Details like account and database will already be filled in for you; all you should need to replace are the values below with your previously provided username and password.

```
SF_USERNAME=<YOUR SNOWFLAKE USERNAME>
SF_PASSWORD=<YOUR SNOWFLAKE PASSWORD>
```
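Before spinning up the container, you can confirm the file loads as expected. A minimal sketch, assuming placeholder values (the `SF_USERNAME`/`SF_PASSWORD` names come from the snippet above; the write-and-source check is our own illustration, not part of Flipside tooling):

```shell
# Write a .env with placeholder credentials (substitute your real ones).
cat > .env <<'EOF'
SF_USERNAME=my_username
SF_PASSWORD=my_password
EOF

# Source the file with allexport on, so every assignment becomes an
# environment variable, then confirm the username is visible.
set -a
. ./.env
set +a
echo "connecting as: ${SF_USERNAME}"   # prints: connecting as: my_username
```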
4. Open a terminal window in the repository directory and run the command `make dbt-console`. If successful, a Docker container should spin up, install dbt, and open a console for you to run dbt commands. The container will read your `.env` file and should be connected to operate on the community curation database.
5. Test your connection!
   - `dbt debug` to check installation.
   - `dbt test -s core__fact_blocks` to run a set of tests on the `<chain>.core.fact_blocks` model in the community curation database to check your connection and credentials.
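Step 5 can be sketched as a short session; the two dbt commands are the ones named above, while the `run_or_echo` helper is our own illustration so the script degrades gracefully outside the `make dbt-console` container, where dbt may not be installed:

```shell
# Run a dbt command if dbt is available, otherwise just show what would run.
run_or_echo() {
  if command -v dbt >/dev/null 2>&1; then
    "$@"
  else
    echo "would run: $*"
  fi
}

run_or_echo dbt debug                      # verify installation and credentials
run_or_echo dbt test -s core__fact_blocks  # test the <chain>.core.fact_blocks model
```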
You are now ready to create your first data contribution! Read on for an example contribution guide which includes some dbt basics, or review the Model Standards for insight on how we structure our projects.