Blog

Plotly: Data Analytics & Visualisation Tool

Plotly is a data analytics and visualization tool. Plotly provides online graphing, analytics, and statistics tools for individuals and collaboration, as well as scientific graphing libraries for Python, R, MATLAB, Perl, Julia, Arduino, and REST.

Plotly’s main products include:

  1. Dash is an open-source Python framework for building web-based analytic applications.
  2. Dash DAQ is a non-open-source package for building data acquisition GUIs to use with scientific instruments. It is built on Dash.
  3. Plot.ly provides a graphical user interface for importing data into a grid and analyzing it with stats tools.
  4. API libraries for Python, R, MATLAB, Node.js, Julia, and Arduino, plus a REST API (a minimal Python sketch follows below). Plotly can also be used to style interactive graphs inside Jupyter notebooks.
  5. Figure Converters, which convert matplotlib, ggplot2, and IGOR Pro graphs into interactive, online graphs.
  6. Plotly Apps for Google Chrome.
  7. Plotly.js is an open-source JavaScript library for creating graphs and dashboards.
  8. Plotly Enterprise, an on-premises installation of Plotly.

Follow the link: https://plot.ly/#/
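
To give a feel for item 4, here is a minimal, illustrative sketch using the Plotly Python library (this assumes a recent plotly release; older versions expose the same objects under plotly.graph_objs):

    import plotly.graph_objects as go

    # Build a simple interactive line chart
    fig = go.Figure(
        data=go.Scatter(x=[1, 2, 3, 4], y=[10, 15, 13, 17], mode="lines+markers")
    )
    fig.update_layout(title="A minimal Plotly line chart")

    # Opens the interactive figure in a browser, or renders inline in a Jupyter notebook
    fig.show()

The same kind of figure can also be embedded in a Dash app or shared through Plotly's online tools.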

 

Tool of the Week – RAW Graphs

RAW Graphs is an open source data visualisation framework built with the goal of making the visual representation of complex data easy for everyone. Primarily conceived as a tool for designers and vis geeks, RAW Graphs aims at providing a missing link between spreadsheet applications (e.g. Microsoft Excel, Apple Numbers, OpenRefine) and vector graphics editors (e.g. Adobe Illustrator, Inkscape, Sketch). The project, led and maintained by the DensityDesign Research Lab (Politecnico di Milano), was released publicly in 2013 and is regarded by many as one of the most important tools in the field of data visualisation. After a couple of years, the involvement of Contactlab as a funding partner brought the project to a new stage: DensityDesign and Calibro can now plan new releases and new ways to involve the community.

Top Links of the Week:

Here are my top links of the week:

1. Data Structures and Algorithms in JavaScript – Full Course for Beginners: https://www.freecodecamp.org/n/EWd2k87

2. Largest programming and computer courses on the website: https://medium.freecodecamp.org/f0bd3a184625

3. GitHub was recently acquired by Microsoft – learn why: https://news.microsoft.com/2018/06/04/microsoft-to-acquire-github-for-7-5-billion

Happy Learning!

Text Mining

Text mining, also referred to as text data mining, roughly equivalent to text analytics, is the process of deriving high-quality information from text. High-quality information is typically derived through the devising of patterns and trends through means such as statistical pattern learning. Text mining usually involves the process of structuring the input text (usually parsing, along with the addition of some derived linguistic features and the removal of others, and subsequent insertion into a database), deriving patterns within the structured data, and finally evaluation and interpretation of the output. ‘High quality’ in text mining usually refers to some combination of relevance, novelty, and interestingness. Typical text mining tasks include text categorization, text clustering, concept/entity extraction, production of granular taxonomies, sentiment analysis, document summarization, and entity relation modeling (i.e., learning relations between named entities).

Text analysis involves information retrieval, lexical analysis to study word frequency distributions, pattern recognition, tagging/annotation, information extraction, data mining techniques including link and association analysis, visualization, and predictive analytics. The overarching goal is, essentially, to turn text into data for analysis, via application of natural language processing (NLP) and analytical methods.
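
As a toy illustration of the "word frequency distribution" step mentioned above, here is a sketch using only Python's standard library (the sample text is invented; a real pipeline would add proper tokenisation, stop-word removal, stemming, and so on):

    import re
    from collections import Counter

    text = """Text mining turns unstructured text into structured data.
    Text analytics then looks for patterns in that structured data."""

    # Crude lexical analysis: lowercase, split on non-letter characters, drop empty tokens
    tokens = [t for t in re.split(r"[^a-z]+", text.lower()) if t]

    # Word frequency distribution
    freq = Counter(tokens)
    print(freq.most_common(5))   # e.g. [('text', 3), ('structured', 2), ('data', 2), ...]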

A typical application is to scan a set of documents written in a natural language and either model the document set for predictive classification purposes or populate a database or search index with the information extracted.
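
For the predictive-classification case, a hedged sketch using scikit-learn (not mentioned in the post, but a common choice) might look like the following; the documents and labels are made up purely for illustration:

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline

    # Tiny labelled corpus, invented for illustration
    docs = [
        "the match ended with a late goal",
        "the striker scored twice in the final",
        "the central bank raised interest rates",
        "markets fell after the inflation report",
    ]
    labels = ["sport", "sport", "finance", "finance"]

    # Structure the text (TF-IDF features), then fit a simple classifier
    model = make_pipeline(TfidfVectorizer(), MultinomialNB())
    model.fit(docs, labels)

    print(model.predict(["the keeper saved a penalty"]))  # expected: ['sport']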

Beautiful Soup is a Python library for pulling data out of HTML and XML files. It works with your favorite parser to provide idiomatic ways of navigating, searching, and modifying the parse tree. It commonly saves programmers hours or days of work.
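
A minimal Beautiful Soup 4 sketch (the HTML snippet here is made up; install the library with pip install beautifulsoup4):

    from bs4 import BeautifulSoup

    html = """
    <html><body>
      <p class="title"><b>The Dormouse's story</b></p>
      <a href="http://example.com/one" id="link1">One</a>
      <a href="http://example.com/two" id="link2">Two</a>
    </body></html>
    """

    # Parse with Python's built-in parser (lxml or html5lib also work)
    soup = BeautifulSoup(html, "html.parser")

    print(soup.p.get_text())           # navigate: first <p> tag -> "The Dormouse's story"
    for link in soup.find_all("a"):    # search: every <a> tag
        print(link["href"])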

"The Fish-Footman began by producing from under his arm a great letter, nearly as large as himself."

Beautiful Soup is a Python library for pulling data out of HTML and XML files. It works with your favorite parser to provide idiomatic ways of navigating, searching, and modifying the parse tree. It commonly saves programmers hours or days of work.

You might be looking for the documentation for Beautiful Soup 3. If so, you should know that Beautiful Soup 3 is no longer being developed, and that Beautiful Soup 4 is recommended for all new projects. If you want to learn about the differences between Beautiful Soup 3 and Beautiful Soup 4, see Porting code to BS4.