Changelog¶
v2.0.0 (2024-07-31)¶
🛑 Breaking Changes¶
- service: Remove default values for mandatory arguments in aggregation methods such as aggregate_over. The value_column parameter must now be provided.
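For illustration, a minimal sketch of the new requirement, assuming a catalog with a hypothetical GROCERYINVOICE event table and Amount column (names are illustrative, not part of the release):

```python
import featurebyte as fb

catalog = fb.Catalog.get_active()
invoice_view = catalog.get_table("GROCERYINVOICE").get_view()  # hypothetical table name

# Since v2.0.0, value_column must be passed explicitly to aggregate_over
customer_spend = invoice_view.groupby("GroceryCustomerGuid").aggregate_over(
    value_column="Amount",        # must now be provided explicitly
    method="sum",
    windows=["7d"],
    feature_names=["CustomerSpend_7d"],
)
```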
💡 Enhancements¶
- linting: Using ruff as the linter for the project
- package: Upgrade Pydantic to V2
⚠️ Deprecations¶
- dependencies: Deprecation of sasl library to support python 3.11
🐛 Bug Fixes¶
- service: Fix feature metadata extraction throwing KeyError during feature info retrieval
- session: Handle schema and table listing for catalogs without information schema in DataBricks Unity.
v1.1.4 (2024-07-09)¶
💡 Enhancements¶
- service: Validate aggregation method is supported in more aggregation methods
- service: Added support for count_distinct aggregation method
🐛 Bug Fixes¶
- numpy: Explicitly set upper bound for numpy version to <2.0.0
v1.1.3 (2024-07-05)¶
💡 Enhancements¶
- worker: Speed up table description by excluding top and most frequent values for float and timestamp columns.
🐛 Bug Fixes¶
- api: Fix materialized table download failure.
v1.1.2 (2024-06-25)¶
💡 Enhancements¶
- service: Improve feature job efficiency for latest aggregation features without window
- service: Add support for updating feature store details
🐛 Bug Fixes¶
- service: Fix error when using request column as key in dictionary feature operations
v1.1.1 (2024-06-10)¶
💡 Enhancements¶
- sdk-api: Use workspace home as default config path in databricks environment.
v1.1.0 (2024-06-08)¶
🛑 Breaking Changes¶
- sdk-api: Skip filling null value by default for aggregated features.
- service: Rename FeatureJobSetting attributes to match the new naming convention.
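A hedged sketch of the rename, assuming the new convention replaces frequency and time_modulo_frequency with period and offset (this mapping is an assumption; verify against the v1.1.0 API reference):

```python
import featurebyte as fb

# Assumed new attribute names; previously frequency / time_modulo_frequency
feature_job_setting = fb.FeatureJobSetting(
    blind_spot="135s",
    period="60m",
    offset="90s",
)
```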
💡 Enhancements¶
- service: Perform sampling operations without sorting tables
- service: Support offset parameter in aggregate_over and forward_aggregate (see the sketch after this list)
- service: Add default feature job settings to the SCDTable.
- dependencies: Bumped freeware to 0.2.18 to support new feature job settings
- service: Relax constraint that key has to be a lookup feature in dictionary operations
- dependencies: Bump snowflake-connector-python
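A minimal sketch of the offset parameter noted above, using illustrative table and column names; the offset shifts the end of the aggregation window back from the point-in-time:

```python
import featurebyte as fb

catalog = fb.Catalog.get_active()
invoice_view = catalog.get_table("GROCERYINVOICE").get_view()  # hypothetical table name

# Sum over a 7-day window that ends 3 days before the point-in-time
lagged_spend = invoice_view.groupby("GroceryCustomerGuid").aggregate_over(
    value_column="Amount",
    method="sum",
    windows=["7d"],
    offset="3d",
    feature_names=["CustomerSpend_7d_offset_3d"],
)
```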
🐛 Bug Fixes¶
- service: Fix incorrect type casting in most frequent value UDF for Databricks Unity
v1.0.3 (2024-05-21)¶
💡 Enhancements¶
- service: Backfill only required tiles for offline store tables when enabling a deployment
- service: Fix view and table describe method error on invalid datetime values
- service: Cast type for features with float dtype
- docker: Bump base docker image to python 3.10
- api: Introduce databricks accessor to deployment API object.
- api: Support specifying the target column when creating an observation table (see the sketch after this list).
    - This change allows users to specify the target column when creating an observation table.
    - The target column is the column that contains the target values for the observations.
    - The target column name must match a valid target namespace name in the catalog.
    - The primary entities of the target namespace must match that of the observation table.
- service: Run feature computation queries in parallel
- service: Cast features with integer dtype to BIGINT explicitly in feature queries
- api: Use async task for table / view / column describe to avoid timeout on large datasets.
- gh-actions: Migrate to pytest-split in github actions
    - Databricks tests
    - Spark tests
- service: Avoid repeated graph flattening in GraphInterpreter and improve tile sql generation efficiency
- service: Skip casting data to string in describe query if not required
- sdk-api: Prevent users from creating a UDF feature that is not deployable.
- service: Run on demand tile computation concurrently
- service: Validate point in time and entity columns do not contain missing values in observation table
- service: Validate internal row index column is valid after features computation
- service: Improve precomputed lookup feature tables handling
- service: Support creating Target objects using forward_aggregate_asat
- service: Handle duplicate rows when looking up SCD and dimension tables
- service: Calculate entropy using absolute count values
- models: Limit asset names to 255 characters in length to ensure they can be referenced as identifiers in SQL queries
    - This change ensures that asset names are compatible with the maximum length of identifiers in SQL queries
    - This change will prevent errors when querying assets with long names
- dependencies: Bump dependencies to latest version
    - snowflake-connector-python
    - databricks-sdk
    - databricks-sql-connector
- api: Add more associated objects to historical feature table objects.
- service: Create tile cache working tables in parallel
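A hedged sketch of the target column option referenced in the list above; the table, column, and target names are hypothetical and the exact signature should be checked against the API reference:

```python
import featurebyte as fb

catalog = fb.Catalog.get_active()
source_table = catalog.get_source_table("CUSTOMER_SAMPLING")  # hypothetical table

# target_column must match a valid target namespace name in the catalog, and the
# target namespace's primary entities must match those of the observation table
observation_table = source_table.create_observation_table(
    name="Customer observations with target",
    columns_rename_mapping={"sampled_at": "POINT_IN_TIME"},  # hypothetical column
    target_column="CustomerSpend_7d_target",                 # hypothetical target name
)
```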
⚠️ Deprecations¶
- redis: Dropping aioredis as redis client library
🐛 Bug Fixes¶
- service: Fix offline store feature table name construction logic to avoid name collisions
- service: Fix ambiguous column name error when concatenating serving names
- service: Fix target SCD lookup code definition generation bug when the target name contains special characters.
- deps: Pinning pyopenssl to 24.X.X as client requirement
- service: Fix Databricks integration not working as expected.
- service: Fix KeyError caused by precomputed_lookup_feature_table_info due to backward compatibility issue
- session: Set active schema for Snowflake explicitly. The connector does not set the specified active schema.
- service: Fix an error when submitting data describe task payload
- session: Fix dtype detected wrongly for MAP type in Spark session
- api: Make dtype mandatory when creating a target namespace
- session: Fix DataBricks relative frequency UDF to return None when all counts are 0
- service: Handle missing values in SCD effective timestamp and point in time columns
- session: Fix DataBricks entropy UDF to return 0 when all counts are 0
- udf: Fix division by zero in count dict cosine similarity UDFs
- dependencies: Bumping vulnerable dependencies
    - orjson
    - cryptography
    - ~~fastapi~~ (Need to bump to pydantic 2.X.X)
    - python-multipart
    - aiohttp
    - jupyterlab
    - black
    - pymongo
    - pillow
- session: Set ownership of created tables to the session group. This is a fix for the issue where the tables created cannot be updated by other users in the group.
v1.0.2 (2024-03-15)¶
🐛 Bug Fixes¶
- service: Databricks integration fix
v1.0.1 (2024-03-12)¶
💡 Enhancements¶
- api: Support description specification during table creation.
- api: Create api to manage online stores
- session: Specify role and group in Snowflake and Databricks details to enforce permissions for accessing source and output tables
- service: Simplify user defined function route creation schema
- online_serving: Implement FEAST offline stores for Spark Thrift and DataBricks for online serving support
- service: Compute data description in batches of columns
- service: Support offset parameter for aggregate_asat
- profile: Create a profile from databricks secrets to simplify access from a Databricks workspace.
- service: Improve efficiency of feature table cache checks for saved feature lists
- session: Add client_session_keep_alive to snowflake connector to keep the session alive
- service: Support cancellation for historical features table creation task
🐛 Bug Fixes¶
- service: Updates output variable type of count aggregation to be integer instead of float
- service: Fix FeatureList online_enabled_feature_ids attribute not updated correctly in some cases
- session: Fix snowflake session using wrong role if the user's default role does not match role in feature store details
- session: Fix count dictionary entropy UDF behavior for edge cases
- deployment: Fix getting sample entity serving names for deployment fails when entity has null values
- service: Fix ambiguous column name error when using SCD lookup features with different offsets
v1.0.0 (2023-12-21)¶
💡 Enhancements¶
- session: Implement missing UDFs for DataBricks clusters that support Unity Catalog.
- storage: Support azure blob storage for file storage.
🐛 Bug Fixes¶
- service: Fixes a bug where feature saving would fail if the feature or column name contains quotes.
- deployment: Fix an issue where periodic tasks were not disabled when reverting a failed deployment
v0.6.2 (2023-12-01)¶
🛑 Breaking Changes¶
- api: Support using observation tables in feature, target and featurelist preview
    - Parameter observation_set in Feature.preview, Target.preview and FeatureList.preview now accepts an ObservationTable object or pandas dataframe
    - Breaking change: Parameter observation_table in FeatureList.compute_historical_feature_table is renamed to observation_set
- feature_list: Change feature list catalog output dataframe column name from primary_entities to primary_entity
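A hedged sketch of the renamed parameter, with illustrative catalog object names:

```python
import featurebyte as fb

catalog = fb.Catalog.get_active()
feature_list = catalog.get_feature_list("CustomerSpendFeatures")        # hypothetical
observation_table = catalog.get_observation_table("EDA observations")   # hypothetical

# preview now accepts an ObservationTable object or a pandas DataFrame
preview_df = feature_list.preview(observation_set=observation_table)

# Breaking change: the parameter previously named observation_table
historical_table = feature_list.compute_historical_feature_table(
    observation_set=observation_table,
    historical_feature_table_name="Customer historical features",  # hypothetical
)
```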
💡 Enhancements¶
- databricks-unity: Add session for databricks unity cluster, and migrate one UDF to python for databricks unity cluster.
- target: Allow users to create observation table with just a target id, but no graph.
- service: Support latest aggregation for vector columns
- service: Update repeated columns validation logic to handle excluded columns.
- endpoints: Enable observation table to associate with multiple use cases from endpoints
- target: Derive window for lookup targets as well
- service: Add critical data info validation logic
- api: Implement remove observation table from context
- service: Support rename of context, use case, observation table and historical feature table
- target_table: Persist primary entity IDs for the target observation table
- observation_table: Update observation table creation check to make sure primary entity is set
- service: Implement service to materialize features to be published to external feature store
- service: Add feature definition hash to new feature model to allow duplicated features to be detected
- observation_table: Track uploaded file name when creating an observation table from an uploaded file.
- observation_table: Add way to update purpose for observation table.
- tests: Use published featurebyte library in notebook tests.
- service: Reduce complexity of describe query to avoid memory issue during query compilation
- session: Use DBFS for Databricks session storage to simplify setup
- target_namespace: Add support for target namespace deletion
- observation_table: add minimum interval between entities to observation table
- api: Implement delete observation table from use case
- api: Implement removal of default preview and eda table for context
- api: Enable observation table to associate with multiple use cases from api
- api: Implement removal of default preview and eda table for use case
🐛 Bug Fixes¶
- observation_table: fix validation around primary entity IDs when creating observation tables
- worker: Use cpu worker for feature job setting analysis to avoid blocking io worker async loop
- session: Make data warehouse session creation asynchronous with a timeout to avoid blocking the asyncio main thread. This prevents the API service from being unresponsive when certain compute clusters take a long time to start up.
- service: Fix observation table sampling so that it is always uniform over the input
- worker: Fix feature job setting analysis fails for databricks feature store
- session: Fix spark session failing with spark version >= 3.4.1
- service: Fix observation table file upload error
- target: Support value_column=None for count in forward_aggregate/target operations.
- service: Fix division by zero error when calling describe on empty views
- worker: Fix bug where feature job setting analysis backtest fails when the analysis is missing an optional histogram
- service: Fixes a view join issue that causes the generated feature not savable due to graph inconsistency.
- use_case: Allow use cases to be created with descriptive only targets
- service: Fixes an error when rendering FeatureJobStatusResult in notebooks when matplotlib package is not available.
- feature: Fix feature saving bug when the feature contains timestamp filtering
v0.6.1 (2023-11-22)¶
🐛 Bug Fixes¶
- api: fixed async task return code
v0.6.0 (2023-10-10)¶
🛑 Breaking Changes¶
- observation_table: Validate that entities are present when creating an observation table.
💡 Enhancements¶
- target: Use window from target namespace instead of the target version.
- service: UseCase creation to accept TargetNameSpace id as a parameter
- historical_feature_table: Make FeatureClusters optional when creating historical feature table from UI.
- service: Move online serving code template generation to the online serving service
- model: Handle old Context records with entity_ids attribute in the database
- service: Add key_with_highest_value() and key_with_lowest_value() for cross aggregates
- api: Add consistent table feature job settings validation during feature creation.
- api: Change Context Entity attribute's name to Primary Entity
- api: Use primary entity parameter in Target and Context creation
- service: Add last_updated_at in FeatureModel to indicate when feature value is last updated
- api: Revise feature list create new version to avoid throwing error when the feature list is the same as the previous version
- service: Support rprefix parameter in View's join method
- observation_table: Add an optional purpose to observation table when creating a new observation table.
- docs: Documentation for Context and UseCase
- observation_table: Track earliest point in time, and unique entity col counts as part of metadata.
- service: Support extracting value counts and customised statistics in PreviewService
- api: Remove direct observation table reference from UseCase
- warehouse: improve data warehouse asset validation
- api: Use EntityBriefInfoList for entity info for both UseCase and Context
- api: Add trigo functions to series.
- api: Include observation table operation into Context API Object
- observation_table: Add route to allow users to upload CSV files to create observation tables.
- target: Tag entity_ids when creating an observation table from a target.
- api-client: improve api-client retry
- service: Entity Validation for Context, Target and UseCase
- service: Add Context Info method into both Context API Object and Route
- api: Add functionality to calculate haversine distance.
- service: Fix PreviewService describe() method when stats_names are provided
🐛 Bug Fixes¶
- service: Validate non-existent Target and Context when creating Use Case
- session: Fix execute query failing when variant columns contain null values
- service: Validate null target_id when adding obs table to use case
- service: Fix maximum recursion depth exceeded error in complex queries
- service: Fix race condition when accessing cached values in ApiObject's get_by_id()
- hive: fix hive connection error when spark_catalog is not the default
- api: Target#list should include items in target namespace.
- target: Fix target definition SDK code generation by skipping project.
- service: Fix join validation logic to account for rprefix
v0.5.1 (2023-09-08)¶
💡 Enhancements¶
- service: Optimize feature readiness service update runtime.
🐛 Bug Fixes¶
- packaging: Restore cryptography package dependency [DEV-2233]
v0.5.0 (2023-09-06)¶
🛑 Breaking Changes¶
- Configurations: Configurations::use_profile() function is now a method rather than a classmethod
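A before/after sketch, assuming a profile named "production" is already defined in the configuration file:

```python
from featurebyte import Configurations

# Before v0.5.0 (classmethod):
# Configurations.use_profile("production")

# From v0.5.0 onward, call it on an instance
config = Configurations()
config.use_profile("production")
```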
💡 Enhancements¶
- service: Cache view created from query in Spark for better performance
- vector-aggregation: Add java UDAFs for sum and max for use in spark.
- vector-operations: Add cosine_similarity to compare two vector columns.
- vector-aggregation: Add integration test to test end to end for VECTOR_AGGREGATE_MAX.
- vector-aggregations: Enable vector aggregations for tiling aggregate - max and sum - functions
- middleware: Organize exceptions to reduce verbosity in middleware
- api: Add support for updating description of table columns in the python API
- vector-aggregation: Update groupby logic for non tile based aggregates
- api: Implement API object for Use Case component
- api: Use Context name instead of Context id for the API signature
- api: Implement API object for Context
- vector_aggregation: Add UDTF for max, sum and avg for snowflake.
- api: Integrate Context API object for UseCase
- vector-aggregation: Snowflake return values for vector aggregations should be a list now, instead of a string.
- vector-aggregation: Add java UDAFs for average for use in spark.
- vector_aggregation: Only return one row in table vector aggregate function per partition
- service: Support conditionally updating a feature using a mask derived from other feature(s)
- vector-aggregation: Add guardrails to prevent array aggregations if agg func is not max or avg.
- service: Tag semantics for all special columns during table creation
- api: Implement UseCase Info
- service: Change join type to inner when joining event and item tables
- vector-aggregation: Register vector aggregate max, and update parent dtype inference logic.
- service: Implement scheduled task to clean up stale versions and drop online store tables when possible
- use-case: Implement guardrail for use case's observation table not to be deleted
- vector-aggregations: Enable vector aggregations for tiling aggregate avg function
- api: Rename description update functions for versioned assets
- vector-aggregation: Support integer values in vectors; add support integration test for simple aggregates
- vector-aggregation: Update groupby_helper to take in parent_dtype.
- httpClient: added a ssl_verify value in Configurations to allow disabling of ssl certificate verification
- online-serving: Split online store compute and insert query to minimize table locking
- tests: Use the notebook as the test id in the notebook tests.
- vector-aggregation: Add simple average spark udaf.
- vector-aggregation: Add average snowflake udtf.
- api: Associate Deployment with UseCase
- service: Skip creating a data warehouse session when online disabling a feature
- use-case: implement use case model and its associated routes
- service: Apply event timestamp filter on EventTable directly in scheduled tile jobs when possible
🐛 Bug Fixes¶
- worker: Block running multiple concurrent deployment create/update tasks for the same deployment
- service: Fix bug where feature job starts running while the feature is still being enabled
- dependencies: upgrading scipy dependency
- service: Fixes an invalid identifier error in sql when feature involves a mix of filtered and non-filtered versions of the same view.
- worker: Fixes a bug where scheduler does not work with certain mongodb uris.
- online-serving: Fix incompatible column types when inserting to online store tables
- service: Fix feature saving error due to tile generation bug
- service: Ensure row ordering of online serving output DataFrame matches input request data
- dependencies: Limiting python range to >=3.8,<3.12 due to scipy constraint
- service: Use execute_query_long_running when inserting to online store tables to fix timeout errors
- model: Mongodb index on periodic task name conflicts with scheduler engine
- service: Fix conversion of date type to double in spark
v0.4.4 (2023-08-29)¶
🐛 Bug Fixes¶
- api: Fix logic for determining timezone offset column in datetime accessor
- service: Fix SDK code generation for conditional assignment when the assign value is a series
- service: Fix invalid identifier error for complex features with both item and window aggregates
💡 Enhancements¶
- profile: Allow creating a profile directly with fb.register_profile(name, url, token)
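A short usage sketch with placeholder values for the URL and token:

```python
import featurebyte as fb

# Placeholder values for illustration only
fb.register_profile(
    name="tutorial",
    url="https://app.featurebyte.com/api/v1",
    token="<API_TOKEN>",
)
```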
v0.4.3 (2023-08-21)¶
🐛 Bug Fixes¶
- service: Fix feature materialization error due to ambiguous internal column names
- service: Fix error when generating info for features in some edge cases
- api: Fix item table default job settings not synchronized when job settings are updated in the event table, fix historical feature table listing failure
v0.4.2 (2023-08-07)¶
🛑 Breaking Changes¶
- target: Update compute_target to return observation table instead of target table. This will make it easier to use with compute historical features.
- target: Update target info to return a TableBriefInfoList instead of a custom struct. This will help keep it consistent with feature, and also fix a bug in info where we wrongly assumed there was only one input table.
💡 Enhancements¶
- target: Add as_target to SDK, and add node to graph when it is called
- target: Add fill_value and skip_fill_na to forward_aggregate, and update name
- target: Create lookup target graph node
- service: Speed up operation structure extraction by caching the result of _extract() in BaseGraphExtractor
🐛 Bug Fixes¶
- api: Fix api objects listing failure in some notebooks environments
- utils: Fix is_notebook check to support Google Colab [https://github.com/featurebyte/featurebyte/issues/1598]
v0.4.1 (2023-07-25)¶
🛑 Breaking Changes¶
- online-serving: Update online store table schema to use long table format
- dependencies: Limiting python version from >=3.8,<4.0 to >=3.8,<3.13 due to scipy version constraint
💡 Enhancements¶
- generic-function: add user-defined-function support
- target: add basic API object for Target. Initialize the basic API object for Target.
- feature-group: update the feature group save operation to use /feature/batch route
- service: Update describe query to be compatible with Spark 3.2
- service: Ensure FeatureModel's derived attributes are derived from pruned graph
- target: add basic info for Target. Adds some basic information about Targets. Additional information that contains more details about the actual data will be added in a follow-up.
- list_versions: update Feature's & FeatureList's list_versions method by adding is_default to the dataframe output
- service: Move TILE_JOB_MONITOR table from data warehouse to persistent
- service: Avoid using SHOW COLUMNS to support Spark 3.2
- table: skip calling data warehouse for table metadata during table construction
- target: add ForwardAggregate node to graph for ForwardAggregate. Implement ForwardAggregator - only adds node to graph. Node is still a no-op.
- service: Add option to disable audit logging for internal documents
- query-graph: optimize query graph pruning computation by combining multiple pruning tasks into one
- target: add input data and metadata for targets. Add more information about target metadata.
- target: Add primary_entity property to Target API object.
- service: Refactor FeatureManager and TileManager as services
- tests: Move tutorial notebooks into the FeatureByte repo
- service: Replace ONLINE_STORE_MAPPING data warehouse table by OnlineStoreComputeQueryService
- feature: block feature readiness & feature list status transition from DRAFT to DEPRECATED
- task_manager: refactor task manager to take celery object as a parameter, and refactor task executor to import tasks explicitly
- feature: fix bug with feature_list_ids not being updated after a feature list is deleted
- service: Replace TILE_FEATURE_MAPPING table in the data warehouse with mongo persistent
- target: perform SQL generation for forward aggregate node
- feature: fix primary entity identification bug for time aggregation over item aggregation features
- feature: limit manual default feature version selection to only the versions with highest readiness level
- feature-list: revise feature list saving to reduce api calls
- service: Refactor tile task to use dependency injection
- service: Fix error when disabling features created before OnlineStoreComputeQueryService is introduced
- deployment: Skip redundant updates of ONLINE_STORE_MAPPING table
- static-source-table: support materialization of static source table from source table or view
- catalog: Create target_table API object. Remove default catalog, require explicit activation of catalog before catalog operations.
- feature-list: update feature list to preserve feature order
- target: Add gates to prevent target from setting item to non-target series.
- target: Add TargetNamespace#create. This will allow us to register spec-like versions of a Target, that don't have a recipe attached.
- deployment: Reduce unnecessary backfill computation when deploying features
- service: Refactor TileScheduler as a service
- target: stub out target namespace schema and models
- service: Add traceback to tile job log for troubleshooting
- target: add end-to-end integration test for target, and include preview endpoint in target
- feature: update feature & feature list save operation to use POST /feature/batch route
- service: Disable tile monitoring by default
- service: Fix listing of databases and schemas in Spark 3.2
- target: Refactor compute_target and compute_historical_feature
- feature: optimize time to deserialize feature model
- entity-relationship: remove POST /relationship_info, POST /entity/parent and DELETE /entity/parent/ endpoints
- service: Support description update and retrieval for all saved objects
- config: Add default_profile in config to allow for a default profile to be set, and require a profile to be set if default_profile is not set
- target: Create target_table API object. Create the TargetTable API object, and stub out the compute_target endpoint.
- target: Add datetime and string accessors into the Target API object.
- service: Fix unnecessary usage of SQL functions incompatible with Spark 3.2 (ILIKE and DATEADD)
- preview: Improve efficiency of feature and feature list preview by reducing unnecessary tile computation
- service: Fix DATEADD undefined function error in Spark 3.2 and re-enable tests
- service: Implement TileRegistryService to track tile information in mongo persistent
- spark-session: add kerberos authentication and webhdfs support for Spark session
- service: Fix compatibility of string contains operation with Spark 3.2
- target: add CRUD API endpoints for Target. First portion of the work to include the Target API object.
- target: Fully implement compute_target to materialize a dataframe
- service: Refactor info service by splitting out logic to their respective services. Most of the info service logic was not being reused. It also feels cleaner for each service to be responsible for its own info logic. This way, dependencies are clearer. We also refactor service initialization such that we consistently use the dependency injection pattern.
- online-serving: Use INSERT operation to update online store tables to address concurrency issues
- target: create target namespace when we create a target
- service: Fix more datetime transform compatibility issues in Spark 3.2
- storage: Add support for using s3 as storage for featurebyte service
- target: Create target_table services, routes, models and schema. This will help us support materializing target tables in the warehouse.
⚠️ Deprecations¶
- target: remove blind_spot from target models as it is not used
🐛 Bug Fixes¶
- worker: fixed cpu threading model
- service: Fix feature definition for isin() operation
- online-serving: Fix the job_schedule_ts_str parameter when updating online store tables in scheduled tile tasks
- gh-actions: Add missing build dependencies for kerberos support.
- feature_readiness: fix feature readiness bug due to readiness being treated as a string when finding default feature ID
- transforms: Update get_relative_frequency to return 0 when there is no matching label
- service: Fix OnlineStoreComputeQuery prematurely deleted when still in use by other features
- data-warehouse: Fix metadata schema update for Spark and Databricks and bump working version
- service: Fix TABLESAMPLE syntax error in Spark for very small sample percentage
- feature: fix view join operation bug which causes improper query graph pruning
- service: Fix a bug in add_feature() where entity_id was incorrectly attached to the derived column
v0.4.0 yanked (2023-07-25)¶
v0.3.1 (2023-06-08)¶
🐛 Bug Fixes¶
- websocket: make websocket client more resilient to lost connections
- websocket: fix client failure when starting secure websocket connection
v0.3.0 (2023-06-05)¶
💡 Enhancements¶
- guardrails: add guardrail to make sure *Table creation does not contain shared column name in different parameters
- feature-list: add default_feature_fraction to feature list object
- datasource: check if database/schema exists when listing schemas/tables in a datasource
- error-handling: improve error handling and messaging for Docker exceptions
- feature-list: Refactor compute_historical_features() to use the materialized table workflow
- workflows: Update daily cron, dependencies and lint workflows to use code defined github workflows.
- feature: refactor feature object to remove unused entity_identifiers, protected_columns & inherited_columns properties
- scheduler: implement soft time limit for io tasks using gevent celery worker pool
- list_versions(): add is_default column to feature's & feature list's list_versions object method output DataFrame
- feature: refactor feature class to drop FrozenFeatureModel inheritance
- storage: support GCS storage for Spark and DataBricks sessions
- variables: expose catalog_id property in the Entity and Relationship API objects
- historical-features: Compute historical features in batches of columns
- view-object: add column_cleaning_operations to view object
- logging: support overriding default log level using environment variable LOG_LEVEL
- list_versions(): remove feature_list_namespace_id and num_feature from feature_list.list_versions()
- feature-api-route: remove entity_ids from feature creation route payload
- historical-features: Improve tile cache performance by reducing unnecessary recalculation of tiles for historical requests
- worker: support scheduler, worker:io, worker:cpu in startup command to start different services
- feature-list: add default_feature_list_id to feature_list.info() output
- feature: remove feature_namespace_id (feature_list_namespace_id) from feature (feature list) creation payload
- docs: automatically create debug folder if it doesn't exist when running docs
- feature-list: add primary_entities to feature list's list() method output DataFrame
- feature: add POST /feature/batch endpoint to support batch feature creation
- table-column: add cleaning_operations to table column object & view column object
- workflows: Update workflows to use code defined github workflows.
- feature-session: Support Azure blob storage for Spark and DataBricks sessions
- feature: update feature's & feature list's version format from dictionary to string
- feature-list: refactor feature list class to drop FrozenFeatureListModel inheritance
- display: implement HTML representation for API objects' .info() result
- feature: remove dtype from feature creation route payload
- aggregate-asat: Support cross aggregation option for aggregate_asat.
- databricks: support streamed records fetching for DataBricks session
- feature-definition: update feature definition by explicitly specifying on parameter in join operation
- source-table-listing: Exclude tables with names that have a "__" prefix in source table listing
⚠️ Deprecations¶
- middleware: removed TelemetryMiddleware
- feature-definition: remove unused statement from feature.definition
- FeatureJobSettingAnalysis: remove analysis_parameters from FeatureJobSettingAnalysis.info() result
🐛 Bug Fixes¶
- relationship: fixed bug that was causing an error when retrieving a Relationship with no updated_by set
- dependencies: updated requests package due to vuln
- mongodb: mongodb logs to be shipped to stderr to reduce disk usage
- deployment: fix multiple deployments sharing the same feature list bug
- dependencies: updated pymdown-extensions due to vuln CVE-2023-32309
- dependencies: fixed vulnerability in starlette
- api-client: API client should not handle 30x redirects as these can result in unexpected behavior
- mongodb: update get_persistent() by removing global persistent object (which is not thread safe)
- feature-definition: fixed bug in feature.definition so that it is consistent with the underlying query graph
v0.2.2 (2023-05-10)¶
💡 Enhancements¶
- Update healthcare demo dataset to include timezone columns
🐛 Bug Fixes¶
- Drop a materialized table only if it exists when cleaning up on error
- Added dependencies workflow to repo to check for dependency changes in PRs
- Fixed taskfile java tasks to properly cache the downloaded jar files.
v0.2.1 (2023-05-10)¶
🐛 Bug Fixes¶
- Removed additional dependencies specified in featurebyte client
v0.2.0 (2023-05-08)¶
🛑 Breaking Changes¶
- featurebyte is now available for early access