In simple terms, Upshot’s ML learns a mapping from an NFT’s traits and previous sales of similar assets to the NFT’s current value. We define the “current value” as an estimate of the sale price that would be observed if the NFT were sold on the open market at that moment. Here, we break down the process by which raw NFT data becomes an appraisal.
- Multi-collection data assembly: Sufficient data aggregation is the first step in our appraisal process. We gather the most up-to-date sales, bids, asks, asset data, traits and collection data for thousands of NFT collections. We also work closely with external data providers such as Reservoir, SimpleHash, and others to further ensure the coverage and quality of data we receive.
- Feature extraction and data cleansing: On its own, the raw data isn’t very useful, so we process it to extract the pieces of information most likely to influence NFT sale prices. Our ideas and intuitions purposely resemble those of seasoned traders and collectors: we find patterns in recent market activity, rarity, and trait value (discussed below). Some extra considerations at this stage:
- We have dedicated capabilities to remove the impact of outliers caused by off-chain transactions and wash trading.
- A key component of strong appraisal performance is ensuring our features are robust to currency movements.
This process results in a set of features that efficiently summarizes the state of all variables that can significantly impact the sale price of an NFT.
- ML Training: We use gradient-boosted tree ensembles, a family of particularly fast ML algorithms that handle numerical and categorical features with ease and are well suited to the type of data that describes NFT attributes. The algorithm is iterative: each step attempts to repair the biggest mistakes of the previous one, until model accuracy no longer improves.
- Prediction: After model training and a lot of testing, the model is ready to be used. As new sales and events arrive, we refresh our features, repeating steps 1 and 2. This refreshed data is passed through the model, which returns fresh appraisals that are immediately posted to our API and front end.
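To make the training step concrete, here is a minimal, self-contained sketch of gradient boosting using decision stumps as the weak learners. This is purely illustrative: the features, data, and hyperparameters are made up, and production systems would use full tree ensembles via libraries such as XGBoost or LightGBM rather than this toy loop. It does show the core idea from the step above: each round fits a new learner to the residuals (the "biggest mistakes") of the ensemble so far.

```python
# Toy gradient boosting for squared loss, with decision stumps as weak learners.
# Feature layout and sale prices below are hypothetical.

def fit_stump(X, residuals):
    """Find the single (feature, threshold) split that best fits the residuals."""
    best = None
    for j in range(len(X[0])):
        for t in sorted(set(row[j] for row in X)):
            left = [r for row, r in zip(X, residuals) if row[j] <= t]
            right = [r for row, r in zip(X, residuals) if row[j] > t]
            if not left or not right:
                continue
            lmean, rmean = sum(left) / len(left), sum(right) / len(right)
            sse = (sum((r - lmean) ** 2 for r in left)
                   + sum((r - rmean) ** 2 for r in right))
            if best is None or sse < best[0]:
                best = (sse, j, t, lmean, rmean)
    _, j, t, lmean, rmean = best
    return lambda row, j=j, t=t, lm=lmean, rm=rmean: lm if row[j] <= t else rm

def predict(row, base, stumps, lr=0.3):
    return base + lr * sum(s(row) for s in stumps)

def gradient_boost(X, y, n_rounds=50, lr=0.3):
    base = sum(y) / len(y)        # start from the mean price
    stumps = []
    for _ in range(n_rounds):
        preds = [predict(row, base, stumps, lr) for row in X]
        residuals = [yi - p for yi, p in zip(y, preds)]  # the "biggest mistakes"
        stumps.append(fit_stump(X, residuals))           # repair them
    return base, stumps

# Hypothetical features: [rarity_score, trait_count, recent_collection_avg_price]
X = [[0.9, 3, 10.0], [0.2, 7, 10.0], [0.8, 4, 12.0], [0.1, 8, 9.0]]
y = [25.0, 8.0, 22.0, 6.0]  # made-up past sale prices in ETH
base, stumps = gradient_boost(X, y)
```

After training, `predict` plays the role of step 4: pass refreshed features through the ensemble to get a fresh appraisal.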
At any given time, in any given collection, most NFTs are not on sale and many may never have sold. The challenge in using ML to generate appraisals is that the training data skews heavily towards assets that have a higher likelihood of selling, and are therefore more likely to sell again. However, we wish to appraise the wider universe of NFTs, which contains many assets that have never been sold and may never be put up for sale.
NFT collections are complex and usually come with a set of trait values, which describe specific aspects of their design, look or utility. As an example, consider the CryptoDickButt below:
Most CryptoDickbutts have 12 trait types, each of which has a single trait value. For example, this Dickbutt’s ‘Hat’ trait type has the trait value ‘Jester’. In any collection, the relationship between trait composition and asset value is complex and often idiosyncratic.
A single trait may be the determining factor for an asset’s value relative to the rest of the collection - like gold fur in Bored Ape Yacht Club. Other times, a combination of slightly less-valuable traits results in a highly-prized asset. Sometimes, having very few traits results in a high value, such as this immaculate CryptoDickbutt #324:
Traits are a key factor in creating good appraisal models. At Upshot, we believe much of our success lies in how well we can process trait data to extract the information most indicative of an asset’s overall value.
The traits explicitly defined in the metadata are only part of the data needed for accurate appraisals. Various aggregations of traits can themselves be treated as traits, and quite impactful ones, even though they are not explicitly defined in the metadata. We call these "metatraits".
For example, a metatrait might be the number of traits an NFT has (which can signal how "clean" or "sloppy" the asset appears). Another metatrait might be an abstraction of already-defined traits e.g. if a collection exhibits multiple types of hoodie traits ("green hoodie", "blue hoodie"), we may create a common "hoodie" metatrait to better identify correlations between otherwise disparate assets; perhaps having a "hoodie" at all drives price fluctuations, not its color.
We constantly update our list of metatraits to make our models that much more insightful, surfacing determinants of value not explicitly defined in any asset's metadata.
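The two metatraits described above can be sketched in a few lines. This is a hypothetical illustration: the trait names, the `Clothes` key, and the hoodie-matching rule are assumptions, not Upshot's actual metatrait definitions.

```python
# Hypothetical sketch of deriving "metatraits" from raw trait metadata.
# Trait names and the hoodie rule are illustrative assumptions.

def extract_metatraits(traits: dict) -> dict:
    meta = {}
    # Metatrait 1: how many traits the asset actually has
    # (can signal how "clean" or "sloppy" it appears).
    meta["trait_count"] = sum(1 for v in traits.values() if v is not None)
    # Metatrait 2: abstract "green hoodie" / "blue hoodie" etc.
    # into a single "hoodie" metatrait, since having a hoodie at all
    # may matter more than its color.
    clothes = (traits.get("Clothes") or "").lower()
    meta["hoodie"] = "hoodie" in clothes
    return meta

asset = {"Hat": "Jester", "Clothes": "Blue Hoodie", "Background": None}
print(extract_metatraits(asset))  # {'trait_count': 2, 'hoodie': True}
```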
As most collectors know, the more unusual or rare an item is, the more likely it is to trade for a higher price. Rarity usually refers to the rarity of an asset within a collection (as opposed to across collections).
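One common convention for within-collection rarity, sketched below, scores each asset by summing the inverse frequency of each of its trait values; this is an illustrative formula, not necessarily the exact statistic Upshot uses. The collection data is made up.

```python
from collections import Counter

# Sketch of a within-collection rarity score: sum over an asset's traits of
# 1 / P(trait value), where P is the share of assets carrying that value.
# This is one common convention, shown for illustration only.

def rarity_scores(collection):
    n = len(collection)
    counts = {}  # trait_type -> Counter of trait values
    for asset in collection:
        for ttype, value in asset.items():
            counts.setdefault(ttype, Counter())[value] += 1
    # Rarer trait values (lower counts) contribute larger terms.
    return [sum(n / counts[t][v] for t, v in asset.items())
            for asset in collection]

collection = [
    {"Hat": "Jester", "Fur": "Gold"},   # both traits unique -> rarest
    {"Hat": "Cap", "Fur": "Brown"},
    {"Hat": "Cap", "Fur": "Brown"},
]
print(rarity_scores(collection))  # [6.0, 3.0, 3.0]
```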
Historical trait value
Rarity is implicitly a price proxy: rarer assets have traits with lower supply and should fetch higher prices. We can also estimate trait value directly, by calculating the average sale price per trait and using statistics similar to those for asset rarity to infer the relative value of assets from the historical values of their trait compositions.
This feature often closely resembles rarity, but can do a much better job of discriminating between sets of traits that may be equally rare but have different realized values when they trade.
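A minimal version of the "average sale price per trait" idea might look like the following. The sale records, trait names, and the simple mean-of-means aggregation are assumptions for illustration, not Upshot's exact feature definition.

```python
from collections import defaultdict

# Sketch of a historical trait-value feature: average past sale price for each
# (trait type, trait value) pair, then a per-asset estimate from its traits.

def trait_value_table(sales):
    """sales: list of (traits_dict, sale_price) records."""
    totals = defaultdict(lambda: [0.0, 0])  # (type, value) -> [price sum, count]
    for traits, price in sales:
        for tv in traits.items():
            totals[tv][0] += price
            totals[tv][1] += 1
    return {tv: s / c for tv, (s, c) in totals.items()}

def asset_trait_value(traits, table, fallback=0.0):
    """Average the historical values of the traits this asset carries."""
    vals = [table[tv] for tv in traits.items() if tv in table]
    return sum(vals) / len(vals) if vals else fallback

sales = [
    ({"Fur": "Gold", "Hat": "None"}, 100.0),
    ({"Fur": "Brown", "Hat": "Cap"}, 10.0),
    ({"Fur": "Brown", "Hat": "None"}, 14.0),
]
table = trait_value_table(sales)
# An unsold gold-fur asset inherits the strong gold-fur price signal.
print(asset_trait_value({"Fur": "Gold", "Hat": "Cap"}, table))  # 55.0
```

Note how this separates two equally rare traits by what they actually fetch: gold fur and a unique hat could be equally scarce, yet trade very differently.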
Trailing prices by bucket
Following the previous two sections, we end up with a variety of rarity- and value-based features. These each enter the model directly, but we’re not quite done.
We want a model that can adapt to changes in the market, and one way to do that is to create features that track recent average prices. We can do this at the collection level, but we can also break our rarity and value metrics into buckets and track prices within each (for example, because a group of rarer assets will probably have a higher recent average price than a group of more common ones).