Blockchain technology is a hot topic nowadays, especially with the recent boom in decentralised finance, the exponential growth of Bitcoin and other cryptocurrencies, and the ongoing NFT craze. From a Data Scientist’s perspective, blockchains are also an exciting source of high-quality data that can be used to tackle a wide range of interesting problems using Statistics and Machine Learning. But what are those problems exactly and is there enough demand for Data Scientists in the blockchain industry to build a career? …
Blockchain transactions are records of interactions between two or more addresses. On the TRON blockchain, there are typically two interacting addresses, and the interactions between them can take many different forms (e.g., creation of new accounts or assets, triggering of smart contracts, transfer of assets, etc.). Each transaction can be uniquely identified by its hash ID, a string of 64 alphanumeric characters.
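Before querying the network, it can be handy to sanity-check that a string at least looks like a transaction hash. The helper below is a hypothetical illustration (it is not part of tronr) and simply encodes the "64 alphanumeric characters" rule described above:

```r
# Hypothetical helper (not part of tronr): checks that a string looks
# like a TRON transaction hash, i.e. exactly 64 alphanumeric characters.
is_valid_tx_id <- function(x) {
  grepl("^[0-9A-Za-z]{64}$", x)
}

# A well-formed 64-character ID passes; malformed ones do not:
is_valid_tx_id(strrep("a1", 32))  # TRUE (64 characters)
is_valid_tx_id("abc123")          # FALSE (too short)
```

A check like this catches copy-paste errors early, before any request is sent to the network.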
Getting information on a transaction or set of transactions is at the core of blockchain data analytics. This article demonstrates how such information can be collected using tronr, an R package that provides a toolbox to explore the TRON network.
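A minimal sketch of looking up a single transaction with tronr follows. The function name `get_tx_info_by_id()` is my assumption based on the package documentation at the time of writing; consult the tronr reference for the exact API. The live call needs internet access, so it is left commented out, and the transaction ID shown is a placeholder, not a real hash:

```r
# Sketch of querying a single transaction with tronr. The function name
# get_tx_info_by_id() is an assumption -- check the tronr reference
# (?tronr) for the exact API before use.
# install.packages("tronr")
# library(tronr)

# Transaction hashes are 64-character alphanumeric strings; this is a
# placeholder value, not a real transaction:
tx_id <- strrep("0", 64)

# Requires internet access; uncomment to query the live network:
# tx_info <- get_tx_info_by_id(tx_id = tx_id)

nchar(tx_id)  # 64
```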
Just like on any other blockchain, transactions on the TRON network are organised into blocks. Blocks are logical units that can be thought of as pages in a ledger. Each block has an ID and a timestamp and contains information on the transactions that took place within a certain period of time. On the TRON blockchain, a new block is produced roughly every 3 seconds.
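Given the 3-second block interval, some back-of-the-envelope throughput figures follow directly:

```r
# With a new block roughly every 3 seconds, the expected number of
# blocks per minute, hour, and day on the TRON network is easy to estimate:
block_interval_sec <- 3

blocks_per_minute <- 60 / block_interval_sec             # 20
blocks_per_hour   <- 60 * 60 / block_interval_sec        # 1200
blocks_per_day    <- 24 * 60 * 60 / block_interval_sec   # 28800
```

These are expected values under the nominal 3-second cadence; actual block counts can vary slightly.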
The account balance is one of the most interesting and important types of data that can be collected from a blockchain. It enables various analytical applications, such as understanding how funds are distributed across the network, which assets the accounts of interest hold, how dynamic the balances are, etc.
This article illustrates how my recently released R package tronr can be used to query account balances on the TRON blockchain (the term "account" refers here to both wallet-like accounts and smart contracts). …
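A balance query with tronr might look like the sketch below. The function name `get_account_balance()` is an assumption drawn from the package documentation; check the tronr reference for the exact signature. The live call needs internet access and is therefore commented out; the address shown is a placeholder that merely follows the base58check format (34 characters starting with "T"), not a real account:

```r
# Sketch of querying an account balance with tronr. The function name
# get_account_balance() is an assumption -- consult the tronr reference
# for the exact API before use.
# install.packages("tronr")
# library(tronr)

# TRON addresses in base58check format are 34 characters long and start
# with "T"; the value below is a placeholder, not a real account:
address <- paste0("T", strrep("x", 33))

# Requires internet access; uncomment to query the live network:
# balance <- get_account_balance(address = address)

nchar(address)  # 34
```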
Tronix (TRX, a.k.a. Tron) is the native currency of the TRON blockchain. The TRX token was originally based on the ERC-20 Ethereum standard and is fully compatible with it. Although the original purpose of TRX was to enable payments for digital entertainment, it has since gained many other use cases that power transactions on the TRON blockchain and build up its economy (mainly in the gaming and decentralised finance sectors).
Founded back in 2017 by Justin Sun and the TRON Foundation, TRON is today one of the most popular blockchain projects out there. Its growing popularity is mainly driven by its speed and low transaction costs, which allow for the development of robust decentralised applications (dApps), especially online games. As a result, the project has recently gained much traction and several high-profile partnerships, including joint ventures with the likes of BitTorrent, Samsung, Huawei, and Opera. In February 2021, the market capitalisation of Tronix (TRX), TRON's native currency, exceeded $4.2B.
I started my career as an academic researcher in the areas of Ecology and Invasion Biology. That involved lots of fieldwork and lab experiments, followed by statistical analyses of the collected data and publishing in peer-reviewed journals. In addition, I taught several data-analysis-heavy courses to university undergraduates and Master's students, including Biostatistics, Population Ecology, and Ecological Modeling.
All of that experience became extremely useful when I decided to leave academia and apply my data analysis skills to solving business problems. Believe it or not, my decision at the time was triggered by the famous article in Harvard Business Review…
The majority of Data Science projects fail. I will not even provide any references in support of this statement — the Internet is full of examples. The reasons for the high failure rate are many and varied. However, as surprising as this may sound, one of the main reasons is the lack of clearly defined project goal(s) and the associated requirements.
Problem understanding and requirements gathering make up an initial phase in pretty much any project management framework, including the widely used “Cross-Industry Standard Process for Data Mining” (CRISP-DM). This implies that the project goals and requirements are already there…
Open any large job board and search for “Data Scientist” positions. Many of the returned job specs will contain a requirement to produce data-driven insights that can be used to optimise business processes or products of the hiring organisation. In this context, “insight” is defined as a novel piece of useful information that has been extracted from data using Statistics or Machine Learning techniques. Here are just a few excerpts from the job ads found on LinkedIn:
You are a Data Scientist working for a commercial company. You spent weeks, or maybe even months, developing a deep learning-based model that accurately predicts an outcome of great interest to your business. You proudly presented the results to your stakeholders. Quite annoyingly, though, they did not pay much attention to the cutting-edge approach you used to build the model. Instead of focusing on how powerful the model was, they started asking lots of questions about why some of its predictions looked the way they did. Your colleagues also felt that some of the critical predictors were missing. They could…
Data Science consultant with multiple years of experience across academic and industrial sectors. Author of several books on data analysis and visualisation.