News from the Tykli team


Introducing Tykli Marshal, the first "Personal Big Data assistant" that helps companies dialogue with their data and extract value from it


Tykli, the Advanced Analytics platform that aims to revolutionize the world of Big Data, presents Tykli Marshal, the solution that personalizes the interaction and dialogue between a company and its data.


Highlighting the meaning and operational context of their data and making it immediately available is the challenge facing every company that wants to gain a competitive advantage from its information assets. Tykli Marshal turns data querying into a guided dialogue, providing intuitive yet extremely powerful analysis and reporting features for a radically new content-access experience.


Tykli Marshal is built on Tykli Edge, Tykli's cloud big data analysis platform. Implementing proprietary text analysis and complex network analysis algorithms, it provides the scalable technological foundation for a wide range of advanced analytics applications (e.g. IoT, customer profiling, predictive analytics).


Thanks to Tykli Edge, Tykli Marshal can simultaneously process web content, news, social media, documents and event logs, automatically reconstructing the entire semantic context without predefined dictionaries and reorganizing the data to enable new kinds of analysis. Through a simple, intuitive and customizable dashboard, Tykli Marshal supports ways of exploring and understanding data that generate knowledge and value.

From there, users can "dialogue" with the content and interact with the entire information asset directly and nimbly, aided by dynamic indicators and metrics.


Moreover, thanks to its Machine and Deep Learning capabilities, the platform learns from user interaction, returning increasingly relevant results. The semantic meaning of the data is refined and enriched as the platform is used, creating a virtuous circle between experience and knowledge.


"The development of Tykli Marshal responds to a precise need we gathered from the market and from the people we talk to: letting them interact with big data in a way that is simple, productive and at the same time flexible. Tykli Marshal can be thought of as a virtual 'personal big data assistant' that reads the data for us, interprets it and presents it effectively, guiding us toward the discovery of new knowledge," said Lorenzo Verna, one of Tykli's founders and the company's CEO.


Tykli Marshal is mainly used in the marketing and operations departments of medium and large companies that have understood how much their success depends on rapidly interacting with and interpreting the information and data around them. The areas where Tykli Marshal finds its widest application are advanced social channel analysis, predictive analytics, marketing and competitive intelligence.

As previously written, humanities and computer science aren't separate worlds. In this context, one of our challenges is the application of our holistic approach to the analysis of the Montelupo Museo Archivio Biblioteca (MMAB).
This is a cultural institution located in Montelupo, a small town near Florence, in Tuscany. Since May 2014 the Museum of Ceramics (opened in 2008), the library and the town's archive have all been located in the same building and operate as a single organization.
Physical and virtual space 
The MMAB can be viewed as composed of two connected spaces, one physical and one virtual: our aim is to study how users behave in both. For instance, visitors move from one room of the building to another and consult a book, probably following a specific pattern of action. Similarly, on the virtual side, people look through the online catalog for a specific item or browse the website of one of the three institutions for the opening hours.
Heterogeneous data 
To reach this goal, we analyze different types of data from each institution, such as: 
  • the catalog's items; 
  • the contents related to these items, such as digital texts, audio and video files;
  • information about how users behave in the physical space; 
  • logins and searches on the websites and online catalogs; 
  • behaviors and comments collected via social networks about the MMAB.
Analyzing this set of data enables us to understand how users interact with and from within the MMAB, and in turn allows us to outline the profile of the institution and its social impact. 

But how exactly?  
Representing all this information as a graph, we can see that elements such as artworks, books, authors, concepts, dates and users' comments are all part of the same system. Specifically, they are the nodes of a network that represents the MMAB, connected to one another by relationships that differ in number and importance. Our goal is to identify which connections exist among those elements, and above all which are the most central, as well as to discover hidden links and patterns among the nodes.
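This graph reading can be sketched in a few lines (a toy illustration: the node names and links below are invented, not actual MMAB records):

```python
# A toy sketch of the MMAB as a network: artworks, books, authors,
# concepts and user comments become nodes, and their relationships
# become edges. All names and links are invented for illustration.
edges = [
    ("vase_1515", "majolica"),            # artwork -> concept
    ("vase_1515", "comment_42"),          # artwork -> user comment
    ("history_of_ceramics", "majolica"),  # book -> concept
    ("history_of_ceramics", "author_rossi"),
    ("comment_42", "majolica"),           # comment mentions the concept
]

# Degree centrality: the nodes with more connections are the ones
# that tie the system together.
degree = {}
for a, b in edges:
    degree[a] = degree.get(a, 0) + 1
    degree[b] = degree.get(b, 0) + 1

most_central = max(degree, key=degree.get)
print(most_central)  # the shared concept "majolica"
```

Even in this tiny sketch, the most central node is the concept shared by an artwork, a book and a user comment: exactly the kind of hidden bridge among the three institutions' collections we want to surface.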
This kind of interpretation allows us to understand how the MMAB works and what effects have arisen from the union of three different cultural organizations, each with its own history and knowledge heritage. In addition, it could help improve access to the collections and promote a more effective use of both spaces and contents.

Finding a model 
Finally, it is important to note that this analysis is just one example of a model, based on network science, that could be successfully applied to the study of any kind of space, to understand all the aspects that contribute to defining its identity and possible uses.

If you want to know more, download the paper about the MMAB case that we submitted to the International Federation of Library Associations and Institutions conference in August 2014.


We are complex systems 
Complex systems are at the core of our daily work. When you think about complexity, your first thought is probably that it is something difficult and far removed from everyday experience. But the truth is that we are both expressions of it and part of it. Our body, with its organs, cells and biological processes, is an example of a complex system, as are social and business systems, and so are many of the infrastructures we build, such as the Internet and highway networks.

Unpredictable behaviours 
All of them are composed of a large number of single, simpler parts that interact with one another. What marks the behaviour of these units is that their interactions and outcomes aren't totally predictable: consequently, nobody knows exactly how the system will change. For instance, our society is a complex system: marketing experts and political campaigners know how hard it can be to persuade people to buy a product or vote for a candidate, and despite all their efforts the exact opposite can happen. In this context, network science is one of the most powerful tools for analyzing these structures: its formalism identifies each part of the system as a node and each relationship as an edge, yielding a model that allows us to understand the system and its properties.

Complexity & potential
But why does it matter to be able to grasp complex systems? We could say it's because complexity goes hand in hand with potential: complex systems are a cross-disciplinary key to reading a phenomenon, and they enable us to better interpret its behavior. There is a clear difference between studying a company by analyzing only what it produces, how it produces it and how much it spends, and instead relating those processes to the fact that the firm operates in a system made up of banks, clients, suppliers and competitors. The outcome of the latter approach is certainly richer and can highlight previously unexpected results.

The DebtRank study  
To give another example, it is worth mentioning DebtRank, a study that applied complex network methodology to give a different insight into the default risk of the financial system. This work, co-authored by Stefano Battiston, shows that most of the FED loan program launched in the USA in 2008-2010 to help manage the crisis went to just 22 banks, which received the biggest slice of the 1.2 trillion dollar allocation. According to the findings, those institutions were not only "too big to fail" but above all "too central to fail", because they all belonged to the same hub inside the US financial system. Their mutual connections were so strong that even a small loss in one could affect the other nodes, causing the infamous "systemic default".
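The intuition behind "too central to fail" can be illustrated with a simplified distress-propagation sketch (this is not the published DebtRank algorithm; the banks, exposure weights and damping factor are invented for illustration):

```python
# Simplified distress propagation in the spirit of DebtRank: a shock
# to one node spreads along weighted exposures, so a highly connected
# "hub" bank damages the whole system more than a peripheral one.
EXPOSURES = {  # EXPOSURES[a][b]: fraction of a's capital exposed to b
    "hub": {"a": 0.5, "b": 0.5, "c": 0.5},
    "a": {"hub": 0.6},
    "b": {"hub": 0.6},
    "c": {"hub": 0.6},
}

def total_distress(shocked, rounds=3, damping=0.5):
    """Shock one bank and return the system's total distress afterwards."""
    h = {bank: 0.0 for bank in EXPOSURES}
    h[shocked] = 1.0
    for _ in range(rounds):
        nxt = dict(h)
        for a, out in EXPOSURES.items():
            for b, w in out.items():
                # a absorbs part of b's distress, capped at full default.
                nxt[a] = min(1.0, nxt[a] + damping * w * h[b])
        h = nxt
    return sum(h.values())

# Shocking the central node hurts the system more than shocking
# a peripheral one.
print(total_distress("hub") > total_distress("a"))  # True
```

Even with identical shock sizes, the loss that starts at the hub reaches every node, which is the network reading of systemic risk the study proposes.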
Tykli's holistic approach
We often need to explain a lot of similar issues: thus, our challenge is to make complex systems more accessible, whether we are talking about media archives, library archives, human resources or the database of an e-commerce site. The holistic approach of our technology, based on complex network analysis algorithms, makes detecting the properties of any system we investigate simple and immediate.

On one hand, a lively environment of startups active in big data analytics and business solutions; on the other, companies that have yet to fully understand the potential of these applications. This, in brief, is the outlook of the 2014 edition of the Osservatorio Big Data Analytics & Business Intelligence, presented on 10 December 2014 in Milan during the event "Big Data: un mercato in cerca d'autore".
According to the survey conducted by the Politecnico di Milano's School of Management, there are 376 young enterprises around the world (14 based in Italy) working in this area, which raised almost 7.6 billion dollars over the last three years. They offer different kinds of solutions, such as customer analytics (32%), enabling technologies for big data (28%), and social and web analytics tools (17%).
Though the study shows that the Italian market for these technologies has grown by 25% since last year, it also confirms that we are still in a phase driven by expectations of future use. At present, enterprises seem attracted by the potential of these tools but unaware of the full range of possible uses.
They mainly ask for analysis of internal data (84% in Italy versus 70% of European businesses) and favour passive analysis tools such as Performance Management & Basic Analytics systems (78%), while Advanced Analytics applications able to support complex decision-making represent only 22% of the chosen solutions. In addition, 83% of the data is structured (that is, already organized in specific databases that make it easier to investigate), and less than half of all available data is actually analysed by this software.

In this context, Italy has a number of startups providing innovative solutions to manage big data. Click here to watch the panel dedicated to exploring the national landscape, while below you can see the speech by Tykli's CEO Lorenzo Verna about our experience...

… and have a glance at our technology and some of our prospects for the future: 

If you want to know more, you can also sign up to watch the video of the whole event, as well as to read the proceedings.


Every day our goal is to help our customers handle the big amounts of data that are fundamental to their business but can be frustrating and costly to manage without the right tools.
As we want to improve our ability to support them, we are pleased to announce that Orizzonte Sgr, through its Fondo ICT, the first Italian private equity fund specialized in Information & Communication Technology, has become our new partner with a minority stake.

"This is a new stage for Tykli," says Lorenzo Verna, co-founder and CEO of Tykli. "Having an important institutional and technical partner such as Orizzonte Sgr not only gives us a financial contribution to our development, but also marks the beginning of a partnership that allows us to face the market's challenges with a competitive sales plan, offering our customers cutting-edge technology that is constantly up to date."

What is the strategic value of big data? What are the advantages for companies that use big data analytics and business intelligence solutions? What is the role of the data scientist, and what skills should they have?
These are a few of the questions that will be answered at Big Data: un mercato in cerca d'autore, the conference that will be held tomorrow in Milan, organized by the School of Management of Politecnico. 

The Osservatorio Big Data Analytics & Business Intelligence will present its 2014 survey, which examines the diffusion, benefits and successful case histories of this kind of business application.
From 2 to 3 pm Lorenzo Verna, Tykli co-founder, will discuss the role that startups are playing in this sector. 

Moreover, to give a better insight into services and solutions in this field, the conference has also scheduled several panels that explore the opportunities for companies and showcase some innovative big data analytics experiences carried out by enterprises such as American Express and Europcar Italia.
Click here for more details. 

The birth of a word

Nov 25, 2014
90,000 hours of video recording, 7 million recorded words and, as a result, a model based on big data analysis that explains how a baby learns to talk. 
This is the fascinating experiment that Deb Roy, an MIT researcher who studied how his first son began speaking, tells us about in his TED talk.
But he went even further: together with his colleagues, he applied this model to study the development of conversations on social media about a TV programme, discovering that "it's like building a microscope or telescope and revealing new structures about our own behavior around communication".

Accuracy, the ability to predict possible trouble, and real-time warnings: these are the keywords of today's manufacturing intelligence. In other words, a cutting-edge tool that measures key performance indicators should be able to identify, precisely and quickly, all the patterns that characterize manufacturing output and show, if necessary, what is going wrong.
When we talk about this type of analysis, we have until now mainly thought of a quantitative investigation that measures how things work on a production line, from energy consumption to the productivity of human resources. All these pieces of information, which represent, as usual in our field, a big amount of data to manage, give us a fairly precise but static picture of a plant from an accounting and production standpoint. But, more and more often, that is not enough.
Almost every industry has to handle data coming from different sources, and once the production process is understood, the aim is to know, here and now, which factors (and which interactions among them) affect production, and how to address them effectively to improve overall performance. So management needs to: not worry about the fact that data comes from various sources, from suppliers with their own systems to different internal divisions; and be aware that quantitative information is only part of what matters, since understanding the relationships among all the elements involved in production is the key to taking a step forward.
In this sense our approach has the edge, because Tykli's technology makes cause-and-effect relationships understandable even when their roots aren't immediately visible or easily correlated with one another. Indeed, the distinctive feature of our tool is predicting possible future events based on the results of analysing production data. Top-level managers thus have an on-demand overview that shows the strengths and weaknesses of the production processes. Moreover, this analysis boosts managers' ability to optimize their business strategy choices by depicting future scenarios that could affect production. At the same time, our software also works at task-force level as a real-time alert that makes it possible to cope immediately with any event occurring along the assembly line.
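A minimal sketch of the real-time alerting idea described above (the window size, tolerance and readings are invented for illustration, not taken from any real production line):

```python
# Flag a production reading when it drifts too far from the recent
# average: a toy version of a real-time KPI alert on an assembly line.
def alerts(readings, window=3, tolerance=0.2):
    """Return the indices whose reading deviates more than
    `tolerance` (as a fraction) from the mean of the previous window."""
    flagged = []
    for i in range(window, len(readings)):
        mean = sum(readings[i - window:i]) / window
        if abs(readings[i] - mean) > tolerance * mean:
            flagged.append(i)
    return flagged

# Steady hourly output, then a sudden drop on the line.
print(alerts([100, 102, 98, 101, 99, 60, 100]))  # [5]
```

A real deployment would of course combine many correlated signals rather than a single series, which is where relationship analysis adds value over a simple threshold.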
Do you want to know more? Contact us for all details. 


The future of news could be called micromedia: in contrast to today's information overload, we will receive a flow of news strongly shaped by our geographical location, job, relationship status, interests, contacts and even our mood at a given moment.

It might sound a bit like science fiction, but some digital media outlets such as BuzzFeed, and social networks, are already using algorithms that produce a data-driven flow of information (not only news strictly speaking) based on our preferences in this way.

The technologies able to support the birth of this new generation of news providers already exist, and a key role is played by the semantic web and linked data.

But this perspective raises a question: will it be tech companies or media companies that produce and deliver this flow?

Quoting Jeff Jarvis, who has written extensively on journalism as a service, Paul Sparrow, senior vice president at the Newseum, says:

While technology companies have made huge strides in their effort to deliver targeted, customized ads meeting the interests of specific individuals, news/media companies must stop thinking of themselves as just content providers and fundamentally change their focus to become platforms providing critical services, as well as information.

To do so, Sparrow adds:

[...] media companies must engage in the same kind of information collection and data mining that retailers and technology companies have been doing for years.

This change in perspective by newsrooms isn't a given, but some experiments are already under way. For instance, BBC News Labs is developing The News Juicer, a platform that uses linked data to associate all the useful content connected to a concept within the broadcaster's archive. Through a query, the user can link, compare and combine news and facts related to a specific topic, e.g. those about the city of Cambridge in the UK but not Cambridge, Massachusetts.

So the 2014 UK European election was reported with 100 percent linked-data coverage, and the BBC is ready to do the same for the General Election next year. New apps to be released around Christmas will even give users the chance to create thematic pages according to their own preferences.

These are probably only the first steps along the road that could bring us to the "micromedia age". It remains to be seen who, between tech and media companies, will win the challenge of building the most reliable and engaging version of this service. The latter, for sure, now have a gap to close.

What happens when computer science marries literature? They have a child called digital humanities.
Digital humanities, as explained by Meredith Martin, associate professor of English at Princeton University, "brings computational tools such as large-scale databases and text-analysis software to bear on traditional humanities scholarship."
Martin is the director of the new Center for Digital Humanities at Princeton, officially opened at the end of September, which aims to be a bridge among the humanities, computer science and library science. Supporting faculty, graduate and undergraduate research, it is a good example of the huge range of topics that can be gathered under the label "digital humanities".
For instance, one project is the Princeton Prosody Archive, a full-text searchable database created in 2007 which collects more than 10,000 records, such as manuscripts, manuals, articles and grammar books, about prosody - the study of the metrical structure of verse - written between 1750 and 1923. But the Center also works on the "virtual archeologist" project, software that helps to reconstruct frescoes of Akrotiri, on the Greek island of Santorini, that were buried under volcanic ash 3,500 years ago, simulating the traditional procedures followed at excavation sites.
A forthcoming research project will focus on a complex dictionary of pre-modern Chinese texts, made up of a database of characters and sounds plus a hypertext version of all these texts. The project is promoted by the Department of East Asian Studies and will allow scholars to see and explore all the existing intertextual relations.
"Because of the ability of computers to digest and store vast amounts of data” Martin adds, “things that would have taken scholars an entire career to research, computers can now do in a month. Instead of digging through archives trying to find the answer, in collaboration with a computer scientist, the humanist can come up with ways of using existing or new tools to generate a lot of answers very quickly. This, in turn, helps humanists pose new questions."   

"You can't manage what you don't measure": this statement by Douglas Laney, an analyst at the technology research and consulting firm Gartner, expresses an increasingly frequent issue for companies that run big businesses collecting and selling data but don't know exactly how to value these intangible assets.
As the Wall Street Journal reports:
Corporate holdings of data and other “intangible assets,” such as patents, trademarks and copyrights, could be worth more than $8 trillion, according to Leonard Nakamura, an economist at the Federal Reserve Bank of Philadelphia. That’s roughly equivalent to the gross domestic product of Germany, France and Italy combined. 

But, as opposed to machinery or cash, there are no common rules yet for recording them. The topic concerns not only tech companies like Google, Facebook or Apple, but also, for instance, supermarket chains that collect data about customers' habits and sell it to multinationals interested in tailoring their products and marketing.
Until now the Financial Accounting Standards Board, a US organization that establishes and improves accounting principles, has failed in its attempts to find shared criteria for defining what big data is worth. Although some experts claim that investors don't need to know the specific value of intangible assets, and that it is more important to understand other factors, such as how companies use data to make money, accountants seem to find it hard to keep pace with these new forms of property.

Italia think forward!

Oct 20, 2014
On Wednesday 22 October, Tykli will take part in Italia think forward! in Rome, the event celebrating the 35th anniversary of ING Direct Italy.
The focus is on the role of digital innovation in the country's growth, bringing together representatives of institutions and the digital agenda, the world of education and training, and private investors who have financed innovative ideas. The agenda includes, among others, Alessandro Fusacchia, chief of staff at the Ministry of Education, Alessandra Poggiani, general manager of the Agenzia per l'Italia Digitale, and Massimiliano Magrini, co-founder and managing partner of United Ventures, together with Don Koch, CEO of ING Bank Italy.
Tykli will be there as the winner of the 2013 edition of Prendi parte al cambiamento, the annual contest for innovative startups organized by ING Direct.


Whoever seeks will find, says an old proverb. But it doesn't account for the time you need to search, and above all for how well the outcomes really fit your preferences. Nowadays we live in a digital environment where we leave daily evidence of our presence and preferences: commenting on social networks, "googling" information or checking in at a specific place. This is not noise: on the contrary, it is an extremely rich context in which tools like recommendation systems still have room to improve the accuracy and appeal of their suggestions.
In this sense, Tykli Forward provides a powerful tool for profiling users and personalizing suggestions for them, adding this enormous number of external inputs to the information that traditionally helps us understand users' behaviour, and exploring and ranking the relationships that lie among them.
Recommendation systems work, for instance, behind the user interfaces of on-demand TV channels, music web players, news sites and e-commerce stores; and no matter what the user looks for, this kind of activity falls within what we define as complex systems, since we have: 
- many subjects acting as interconnected parts, i.e. thousands of users who, through their computers, search for, listen to and recommend to other users the latest rock songs, even beyond the borders of the specific website where they do so;
- a continuous flow of information moving from users to content providers and back, but also among users and even, as noted above, in less predictable directions, thanks to the digital interactions we perform every day across many different channels; 
- a big amount of data, generated in this way, to deal with.

Until now, recommendation systems have processed different sorts of information about consumers: who and how many they are, which kind of subscription they have chosen, which types of items they prefer, when during the day they use them, how much time they spend examining each one. The same goes for products: price, features, internal rating. But this is no longer enough: the relationships among these elements, and all those coming from the surrounding context, are fundamental.

Tykli Forward goes beyond collecting this sort of information: it connects it, considering what happens in all the environments where users act. So, not only the fact that a client buys a movie and watches it once in the evening, but also that two days later he comments on Twitter about the lead actor, who is shooting a new film in the client's own city, and that several people then join the discussion, commenting and suggesting another movie. By giving a different weight to each of these connections, it improves its ability to predict what users will like; at the same time, it works flexibly, because the relevance of every discovered relationship varies according to the starting point of the search.
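The weighting idea can be sketched as follows (a toy illustration only: the channels, weights, users and items are all invented, not Tykli Forward's actual model):

```python
# Relationship-weighted recommendation sketch: signals from different
# channels (catalog metadata, social mentions) carry different weights,
# and candidate items are ranked by the weighted connections they share
# with a user's context.
signals = [  # (element of the user's context, candidate item, channel)
    ("bought_movie_x", "movie_y", "same_director"),
    ("tweet_about_actor", "movie_y", "social_mention"),
    ("tweet_about_actor", "movie_z", "social_mention"),
    ("friend_comment", "movie_z", "social_mention"),
]
CHANNEL_WEIGHT = {"same_director": 1.0, "social_mention": 0.4}

# Sum the weighted connections for each candidate item.
scores = {}
for _, item, channel in signals:
    scores[item] = scores.get(item, 0.0) + CHANNEL_WEIGHT[channel]

best = max(scores, key=scores.get)
print(best)  # movie_y: one strong catalog link outweighs two weak mentions
```

Changing the channel weights, or the element of the user's context taken as the starting point, reorders the ranking, which is exactly the flexibility described above.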

Don't hesitate to contact us to know more.

On June 13, 2013 the third edition of ANALYTICS, organized by the Innovation Group, will take place in Milan, with the participation of top keynote speakers such as Michael Mandel, chief economic strategist at Wharton's Mack Center for Technological Innovation.

The agenda is crowded with very interesting topics, from the new data driven economy to Big Data, web and democracy.

Lorenzo Verna, Tykli co-founder, will participate in a panel of the session on BIG DATA & ANALYTICS: EMERGING TRENDS AND RISING STARS PLAYER IN THE ITALIAN MARKET.

Looking for the hottest job in the world? Well, look no further. Data specialist/scientist is a highly ranked professional profile that companies are struggling to find. And yet, as McKinsey states,
"by 2018, the United States alone could face a shortage of 1.5 million managers and analysts with the know-how to use the analysis of big data to make effective decisions". 

Don't miss the opportunity and start building your future with us.

Tykli is proudly announcing the partnership with Politecnico di Torino for the brand new MASTER IN DATA ENGINEERING.

Pimp up your CV, check out the requirements and submit your application. We look forward to meeting you!

Lorenzo Verna, co-founder and CEO of Tykli, was a guest on 2024, the programme of Radio 24 hosted by Enrico Pagliarini.
The podcast is here.

On Saturday, January 24, at the headquarters of Il Sole 24 Ore, the winner of the contest Prendi parte al cambiamento, organized by ING Direct, was announced. We are very happy to share that it is Tykli.

Tykli was selected for "the great innovative value and its relevance" among more than 500 startups. 

We want to thank ING Direct, Il Sole 24 Ore and H-Farm Ventures for supporting our startup.

Here is a selection of the articles that talked about Tykli and the contest.

Tykli is one of the three finalists in the contest Prendi parte al cambiamento.
The contest is sponsored by ING Direct in collaboration with Gruppo24Ore and H-Farm.

The winner will be announced on Thursday, January 24, 2012. Stay tuned.

BigData Event In Turin

Nov 06, 2012
We are proud to be a technical sponsor of the 2012 edition of BigDive.
BigDive is an event about big data and the technologies around the world of data. 
The event will be held in Turin at the Toolbox coworking space. 


We have updated our SPARQL editor, *Ql, with a completely redesigned user interface.
The main changes are:

Prefixes Editor
First, you have a clearer and more usable way to select and add namespace prefixes. The namespace prefixes are now listed in a collapsible text area: you can define them once and then hide the text area to save space on the screen. The checkbox fields that were used to select the most common prefixes have been converted to a select field that shows both the namespace URI and the prefix, making it clearer which namespaces you are actually using.

Query Editor
We found that one query editor was not enough. While building a complex query, or exploring a new endpoint, you often write smaller queries to inspect entities or properties. With the old interface, you could write more than one query, select the one you wanted, and run just the selected text. 
We found that this approach required a lot of mouse and keyboard work, so we opted for multiple editors. Every editor is collapsible, and all of them inherit the prefixes.

Common Operator
The run button is now in the header of each query, making it easy to understand which query you are about to run. Beside the run button we have added some common operators, like LIMIT, ORDER BY and GROUP BY.
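A small query using prefixes and two of these operators might look like the following (a sketch only: the `dbo:` vocabulary is DBpedia's, used purely as an illustration of a typical endpoint):

```sparql
# Declared once in the prefixes editor, then hidden to save space.
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
PREFIX dbo:  <http://dbpedia.org/ontology/>

SELECT ?label
WHERE {
  ?city a dbo:City ;
        rdfs:label ?label .
}
ORDER BY ?label   # common operators exposed next to the run button
LIMIT 10
```

The same prefixes are inherited by every editor, so a quick inspection query in a second editor can reuse them without redeclaring anything.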

Format Query
When working on a query, things can get messy. We know that, so we added a Format Query button that will try to format and indent your query.

Show Full Query
We added a button that opens a modal where your query is printed in full, with prefixes, query body and operators.

*Ql is available here.