Artificial Intelligence – Intellectsoft Blog https://www.intellectsoft.net/blog Mon, 18 Nov 2019 18:19:28 +0000 en-US hourly 1 https://wordpress.org/?v=5.2.4 https://www.intellectsoft.net/blog/wp-content/uploads/cropped-favicon-1-32x32.png Artificial Intelligence – Intellectsoft Blog https://www.intellectsoft.net/blog 32 32 What Is LegalTech: Overview + Real-Life Use Cases  https://www.intellectsoft.net/blog/what-is-legaltech/ https://www.intellectsoft.net/blog/what-is-legaltech/#respond Wed, 14 Aug 2019 15:38:29 +0000 https://www.intellectsoft.net/blog/?p=17051 Everything you need to know about the current state of legaltech.

The post What Is LegalTech: Overview + Real-Life Use Cases  appeared first on Intellectsoft Blog.

Legal technology, or legaltech (less often lawtech), is software and technologies that help law firms streamline core processes, like practice and document management, billing and accounting, and e-discovery. 

Legaltech emerged in response to the challenges the industry faced after the financial crisis of 2008: greater demands from in-house lawyers, pressure to cut costs, a lack of standardization in client procurement across departments, the struggle to attract and charge clients in the competitive post-recession market, and competition from new companies that rejected the traditional law firm model. On top of that, law firms needed to plow through vast amounts of data, as email communication was becoming the new normal in the industry. 

Over the years, legaltech has grown to encompass, on the one hand, the software and technology tools that simplify and streamline law practice for lawyers, and on the other, digital tools that simplify the acquisition and management of law services for clients by reducing or eliminating the need to consult a lawyer (or making it easier to find the right one quickly). 

Technologies & Application Areas

On a higher level, legaltech is the industry’s way of performing digital transformation, that is, using software and technologies to simplify operations, become leaner, cater to modern customers, etc. 

Legaltech encompasses the following software solutions and technologies:

  • Workflow software tools
  • Artificial Intelligence/Machine Learning (AI/ML) algorithms
  • Analytics & big data software tools
  • Customer experience (CX) solutions
  • Cloud technology
  • Distributed ledger technology (DLT)

Legaltech Areas

  • AI/ML document review – data-trained algorithms that analyze legal documents across many areas, from risk management to M&A to compliance cases.
  • Document and contract management platforms — tools that streamline or automate creating templates, negotiating clauses, and analyzing content. 
  • Workflow tools – systems that allow lawyers to digitalize and automate their workflows.
  • Smart contracts – self-executing clauses in legal contracts enabled by DLT, the Internet of Things (IoT), and other technologies. 
  • E-discovery tools — solutions that simplify and automate e-discovery stages, for example, machine learning algorithms that collect and process documents and prepare them for a lawyer to review.
  • Legal chatbots – automated rule-based tools enabled by AI/ML that allow users to get answers to basic legal inquiries in a chat or messenger.
  • Online marketplaces — digital platforms that help clients find the right lawyer faster.
  • Cloud-based databases — data is gathered in a single data warehouse, where it can be accessed by employees across departments.
  • Data security — ensuring the highest level of security for legal records with DLT.
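The legal chatbots mentioned above are typically rule-based: they match a user's question against predefined keyword rules and return canned answers, escalating to a human lawyer otherwise. A minimal sketch of that idea follows; the rules and replies are invented for illustration.

```python
# Minimal rule-based legal chatbot: keyword rules mapped to canned answers.
# All rules and reply texts here are hypothetical examples.

RULES = [
    ({"nda", "non-disclosure"},
     "An NDA template usually covers the parties, term, and scope of confidentiality."),
    ({"trademark", "register"},
     "Trademark registration starts with a clearance search, then an application."),
    ({"contract", "terminate"},
     "Check the termination clause for notice periods and early-exit penalties."),
]

FALLBACK = "This looks like a question for a lawyer - let me connect you to one."

def answer(question: str) -> str:
    words = set(question.lower().replace("?", "").split())
    for keywords, reply in RULES:
        if keywords & words:  # any rule keyword appears in the question
            return reply
    return FALLBACK
```

Real products layer ML-based intent detection on top of such rules, but the escalation-to-a-human fallback is the same.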

Legaltech vs Lawtech — What Is the Difference?

Some experts reckon legaltech and lawtech serve different purposes, with the first referring to software and technologies lawyers use to streamline their work (e-discovery, AI/ML document review), and the second encompassing products for their clients (legal chatbots, online marketplaces). Other lawyers reckon lawtech is not a fitting name for the field, as it implies the subject of law in general rather than an industry. Different publications use lawtech and legaltech interchangeably, though legaltech still appears more often. As of now, using either makes equal sense; time will tell which of the terms will stick. 

Use Cases: What Benefits Firms Can Expect from Legaltech

While legaltech is a nascent field, law firms are already leveraging what it has to offer: digitalizing their workflows, improving customer experiences, helping clients recover virtual currency, and more. 

Big Data & Cloud — M&A & Portfolio Management

Looking for a way to review large patent portfolios, lawyers from Bird & Bird have teamed up with their client Nokia (the technology company) and a range of experts from different fields for a solution. Their joint effort resulted in Pattern — a cloud-based big data tool that simplifies M&As and client portfolio management and includes details of every patent ever published. The tool proved efficient and was rolled out to Bird & Bird’s clients in 2017.

Mobile — Workflow & Document Management 

One of our clients, a long-established U.S. law firm, needed to digitalize their workflows and document management, as well as minimize the heavy paperwork common to law firms. 

A comprehensive and handy mobile app proved to be the fitting solution. It empowered the firm’s lawyers with the following features: 

  • Immediate access to critical legal documents; files and documents can be securely shared between the firm’s lawyers
  • Attorneys can send bills and receive payments easily
  • HR center with powerful employee management tools
  • Notification center: clients and lawyers are always notified about changes and updates
  • White Label solution

Artificial Intelligence Document Analysis

Artificial intelligence (AI) is poised to free the legal industry from massive amounts of paperwork. A good example comes from Luminance, an AI platform for due diligence, insurance, contract, and compliance management. The platform uses computer vision, machine learning (ML) algorithms, and probability statistics to analyze documents much quicker than a human. Since its launch in 2015, the platform has gained wide recognition among law firms. 

Customer Experience Management

CX platforms can make a big difference in a legal firm’s operations. In one of our projects, we helped a U.S. law firm fully digitalize their legal consulting with such a platform. It consists of two parts. Customers use a mobile app to request legal help by creating tickets with basic information and attached documents. On the other side, the firm’s representatives and employees are assigned tickets in a web-based interface with a simple workflow (in the style of JIRA) to manage those requests. If a representative cannot address the issue, it is automatically passed on to the firm’s in-house experts. The platform also gathers data, accumulating a knowledge base that simplifies solving similar issues in the future for every representative and in-house lawyer.
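The escalation flow above can be sketched in a few lines. This is not the platform's actual code, just an illustration of the routing rule (representative first, auto-escalation to an in-house expert otherwise); all names are made up.

```python
# Hedged sketch of the ticket-routing logic: a ticket goes to a
# representative first and is escalated to an in-house expert if the
# representative cannot resolve it.

from dataclasses import dataclass, field

@dataclass
class Ticket:
    subject: str
    assignee: str = "representative"
    status: str = "open"
    history: list = field(default_factory=list)

def resolve(ticket: Ticket, representative_can_handle: bool) -> Ticket:
    ticket.history.append(("representative", "reviewed"))
    if representative_can_handle:
        ticket.status = "resolved"
    else:
        # automatic escalation to the firm's in-house experts
        ticket.assignee = "in-house expert"
        ticket.history.append(("system", "escalated"))
    return ticket
```

The ticket history doubles as the raw material for the knowledge base the platform accumulates.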

Illegal Transactions Tracking with DLT

While the legal industry relies heavily on tradition, it has already started leveraging what distributed ledger technology (DLT) has to offer. Josias Dewey, a partner at Holland & Knight and a software engineer, helps the firm’s clients create DLT proof-of-concept applications. For example, he helped develop a DLT tool that tracks and freezes illegal virtual currency transactions, helping a client recover $500,000 lost as a result of a hack. 

Find out how legaltech can help your firm become more effective. Schedule a consultation with our experts.

Facial Recognition in Retail & Hospitality: Cases, Benefits, Laws https://www.intellectsoft.net/blog/facial-recognition-in-retail-and-hospitality/ https://www.intellectsoft.net/blog/facial-recognition-in-retail-and-hospitality/#respond Wed, 17 Apr 2019 18:05:40 +0000 https://www.intellectsoft.net/blog/?p=16555 Explore how top brands leverage face recognition, and how it is regulated.

The post Facial Recognition in Retail & Hospitality: Cases, Benefits, Laws appeared first on Intellectsoft Blog.

Facial recognition is a technology that combines sophisticated software driven by artificial intelligence (AI) algorithms with cameras to collect data about a person’s age, gender, and ethnicity. Popularized by Apple’s iPhone X and already used in law enforcement and airports, the technology is gradually (but stealthily) becoming part of customer experiences in retail spaces and hotels.

How are industry-leading companies using the technology? What benefits do they plan to gain by implementing facial recognition? Is the technology regulated?

Join us as we explore these questions below.

1) Facial Recognition in Retail

CaliBurger — Loyalty Program, In-store Experience

The Californian burger company CaliBurger installed kiosks with facial recognition capabilities at one of their restaurants in Pasadena. Linked to CaliBurger’s loyalty program, the kiosks recognize customers when they approach, load their loyalty accounts, and show favorite and past orders. The customers do not need to add any account details or swipe their cards. As a result, CaliBurger is able to remove friction from the ordering process almost entirely, speeding it up significantly.

As of now, the kiosks are in testing mode. CaliBurger plans to roll out the technology to other restaurants over time. “Facial recognition is part of our broader strategy,” commented Cali Group CEO John Miller in a statement.


CaliBurger’s facial recognition technology works only with prior customer consent, and the company stresses that it does not store facial images.
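Under the hood, kiosks like this typically reduce a face photo to a numeric embedding and match it against the embeddings of enrolled, consenting members. The sketch below illustrates only that matching step with tiny made-up vectors and an arbitrary similarity threshold; real systems use high-dimensional embeddings from a trained model.

```python
# Nearest-neighbor matching of a face embedding against enrolled loyalty
# members via cosine similarity. Vectors and threshold are invented.

import math

ENROLLED = {
    "alice": [0.9, 0.1, 0.2],
    "bob":   [0.1, 0.8, 0.3],
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def identify(embedding, threshold=0.9):
    """Return the best-matching member, or None for an unknown visitor."""
    best, score = None, -1.0
    for member, ref in ENROLLED.items():
        s = cosine(embedding, ref)
        if s > score:
            best, score = member, s
    return best if score >= threshold else None
```

Returning `None` below the threshold is what lets the kiosk fall back to a normal, anonymous ordering flow for non-members.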

7-Eleven — Advanced Facial Recognition in Thailand

In March 2018, 7-Eleven installed facial recognition software in 11,000 stores in Thailand. Coupled with behavior analytics, the solution is used to track loyalty cardholders, monitor customer traffic and stock levels, suggest products for purchase, and recognize the emotions of shoppers.

Thailand’s population is almost 69 million people, and 10 million of them visit a 7-Eleven store each day. It is fair to say that the company plans to get a substantial ROI on the technology and improve those numbers. The implementation might also be connected to the current rise of facial recognition technology across Asia, especially in China, where people can use it to buy products, withdraw cash from ATMs, and even get a loan.

Saks Fifth Avenue — Organized Retail Crime Prevention

An early adopter of facial recognition technology, luxury brand Saks installed it in its flagship store in Toronto’s Eaton Centre for organized retail crime and loss prevention. The facial recognition algorithms track store visitors and convert photos of suspects into biometric templates, checking them against a database of registered shoplifters. The solution also allows the cameras to be accessed remotely from Saks’s New York headquarters.

Here is an example of how facial recognition technology works in crime detection:

[Image: facial recognition matching process. Source: Iowa Department of Transportation]

Walmart, Samsung, Intel, Home Depot

Home Depot has merged their marketing and security departments for a facial recognition project that would provide insight into shoppers’ in-store product browsing. Michael Weidmann, the consultant for the project, said that no company had considered linking the two departments before, but more have been following suit since. With Amazon leading the way with its highly sophisticated and automated Go stores, companies will increasingly leverage facial recognition in retail to attract more customers to physical stores.

Meanwhile, Walmart patented a technology with AI facial recognition that enables cameras to capture the facial expressions of shoppers in checkout lines to measure the degree of dissatisfaction with service. The results may help the company improve the in-store experience, including in-store displays, real-time promotions, and other issues.

In general, the use of facial recognition in retail is on the rise. At the National Retail Federation 2018 show, Samsung and AT&T demonstrated how they plan to use it to gain insights into demographics, store traffic, and customer behavior patterns, and to acquire other data.

Meanwhile, Intel showed a mock-up of a candy store in which a camera recognized recurring shoppers with the help of AI facial recognition, and then informed the store associates about their names, previous purchases, and product recommendations. Properly established, such a facial recognition tool could empower store associates and raise in-store customer satisfaction significantly.

2) Facial Recognition in Hospitality

Accor Hotels — Check-In, Hotel Experience

In 2018, Accor Hotels started trialing facial recognition technology in their Pullman hotel in São Paulo, Brazil to see how it can improve the guest experience. The opportunity was given to selected guests who are part of the loyalty program.

The solution covers key points in Accor’s hotel experience. Guests can use facial recognition to check in at the desk or at an information pillar at the reception, as well as to access their rooms. As a result, keycards are out of the equation, and the guest experience becomes swifter and more streamlined.

Accor Hotels lists deeper personalization and standing out from the competition as other facial recognition benefits, according to Erwan Le Goff, the company’s vice-president of information technology for South America.

“We realize this technology is a big market differentiator and will be utilized by the sector in the years to come,” said Mr. Le Goff.

Marriott International — Check-in Kiosks at the Desk

Around the same time in 2018, Marriott announced they are bringing facial recognition to the check-in desk in the form of kiosks. The chain started testing the technology at two of their properties in China — Marriott Hangzhou and Marriott Sanya (located on an island often referred to as “The Hawaii of China”). The project is also a joint effort with e-commerce giant Alibaba.


“Marriott International has a track record of embracing cutting-edge technology to create memorable experiences for guests,” said Henry Lee, Chief Operations Officer and Managing Director of Marriott International Greater China.

It looks like the chain considers the implementation of facial recognition in hospitality the next important step for the industry. If the current effort proves successful, Marriott plans to roll out the technology to all of its 6,500 properties, although no definite milestones have been set.

Other Hotel Chains — VIP Identification

When it comes to broad-based usage, facial recognition in the hospitality industry is still in its early years. On top of that, it is not properly regulated yet (we will discuss this in a moment). As a result, few companies are willing to voice their efforts, let alone sit down and elaborate on them.

Still, the Guardian reports that a number of high-end hotels in Europe are allegedly using facial recognition to identify VIPs and celebrities for preferred treatment when they enter the front door.   

3) The State of Facial Recognition Technology Regulation

In general, facial recognition technology is not regulated; in the United States, only the states of Illinois and Texas are exceptions. Still, the gears that facilitate the emergence of corresponding laws are already in motion, and the conversation has already started.

“The technology is in some environments where I’m sure millions of people, in a year, or even in a month, are subjected to it,” said Donna Lieberman, executive director of the New York Civil Liberties Union, to New York Magazine. “Nobody has any idea that it’s happening, or what data is being collected, or how it’s being stored, or for how long, or who has access to it.”


Ritchie Torres, a New York City Councilman representing the Bronx, introduced a bill to move the needle on the issue. It would require businesses to inform the public if they are using facial recognition, how long they store the data gathered by it, and who they share this data with. The legislation has not gained traction yet, and the magazine argues that it will face serious lobbying from the tech industry, citing the lack of success of similar laws in Alaska, Connecticut, Montana, New Hampshire, and Washington. Trade groups and tech giants like Google and Facebook are at the forefront of the opposition, the latter being especially aggressive on the issue.

Geoff White, a privacy specialist with the Public Interest Advocacy Centre in Ottawa, told The Guardian that companies using facial recognition technology have an obligation to be more transparent.

Pushing tech companies to become more transparent and hoping for a resolution should not be viewed as an impossible feat. After all, privacy is paramount today, and companies that fail to follow through risk losing their clients, while also losing image points along the way.


 

Big Data Tools, Processes & Frameworks | In-Depth Guide https://www.intellectsoft.net/blog/big-data-tools-processes-and-frameworks-in-depth-guide/ https://www.intellectsoft.net/blog/big-data-tools-processes-and-frameworks-in-depth-guide/#respond Tue, 20 Nov 2018 16:23:58 +0000 https://www.intellectsoft.net/blog/?p=15916 Use our guide to start mapping out your Big Data solution.

The post Big Data Tools, Processes & Frameworks | In-Depth Guide appeared first on Intellectsoft Blog.

 

Big Data has long become a default setting for most IT projects. Whether it is an enterprise solution for tracking compactor sensors in an AEC project, or an e-commerce project aimed at customers across the country — gathering, managing, and then leveraging large amounts of data is critical to any business in our day and age.

In the first scenario, we have a massive amount of data from compactor sensors that can be used for algorithm training and AI inference deployed on the edge. A single partly autonomous compactor equipped with the right sensor suite could generate up to 30 TB of data daily.

As for the second case, a countrywide e-commerce solution would serve millions of customers across many channels: mobile, desktop, chatbot service, assistant integrations with Alexa and Google Assistant, and others. The solution would also need to support delivery operations, back-end logistics, supply chain, customer support, analytics, and so on.

These and many other cases involve millions of data points that should be integrated, analyzed, processed, and used by various teams in everyday decision making and long-term planning alike.

Thus, before implementing a solution, a company needs to know which Big Data tools and frameworks would work in its case.

In this guide, we will look closely at the tools, knowledge, and infrastructure a company needs to establish a Big Data process and run complex enterprise systems.

From the database type to machine learning engines, join us as we explore Big Data below.

Big Data & The Need for a Distributed Database System

In the old days, companies usually started system development from a centralized monolithic architecture. The architecture would work well for a couple of years, but was not suitable for a growing number of users and the resulting load.


Centralized System

 

Then, software engineers started scaling the architecture vertically by using more powerful hardware: more RAM, better CPUs, and larger hard drives (there were no SSDs at that moment in time).

When the system came under more load, the app logic and database could be split across different machines.


Multiple Apps and Databases

 

After some time, companies proceeded with app logic and database replication — spreading the computation across several nodes and combining them with a load balancer.

All this helped companies manage growth and serve their users. On the other hand, the process increased the cost of infrastructure support and demanded more resources from the engineering team, as they had to deal with node failures, system partitioning, and, in some cases, data inconsistency arising from misconfigurations in the database or bugs in application logic code.

At this point, software engineers faced the CAP theorem and started thinking about what is more important:

a) Consistency: every read receives the most recent write or an error, but never stale data.

b) Availability: every request receives a response, but with no guarantee that it contains the most recent data.

c) Partition Tolerance: the system continues to operate despite an arbitrary number of messages being dropped (or delayed) by the network between nodes.

In particular, the CAP theorem states that it is impossible for a distributed data store to simultaneously provide more than two of the above guarantees.
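The trade-off becomes concrete during a network partition: a replica that cannot sync with the primary must either refuse to answer (consistency, "CP") or answer with possibly stale data (availability, "AP"). The toy model below, with two replicas of a single key, is only an illustration of that choice, not a real database design.

```python
# Toy illustration of the consistency/availability trade-off under a
# network partition: two replicas of one value, replication "cut off"
# while partitioned. A CP read refuses stale data; an AP read serves it.

class Replica:
    def __init__(self):
        self.value, self.version = None, 0

def write(primary, secondary, value, partitioned):
    primary.value, primary.version = value, primary.version + 1
    if not partitioned:  # replication succeeds only without a partition
        secondary.value, secondary.version = primary.value, primary.version

def read(primary, secondary, mode):
    stale = secondary.version < primary.version
    if mode == "CP" and stale:
        raise RuntimeError("unavailable: replica cannot guarantee fresh data")
    return secondary.value  # AP mode: always answer, possibly with old data
```

During a partition the CP read raises (sacrificing availability) while the AP read returns the last replicated value (sacrificing consistency) — exactly the choice the theorem forces.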

But usage continued to grow, and companies and software engineers needed to find new ways to increase the capacity of their systems. This brings us to the realm of horizontally scalable, fault-tolerant, and highly available heterogeneous system architectures.

Horizontally Scalable Infrastructure

 

Distributed File System (DFS) as a main storage model

The Google File System (GFS) served as the main model for the development community to build the Hadoop framework and the Hadoop Distributed File System (HDFS), which could run MapReduce tasks. The idea is to take many pieces of commodity hardware and run a distributed file system for large datasets on top of them.

As the data is distributed among a cluster’s many nodes, the computation takes the form of MapReduce tasks. MapReduce and other schedulers assign workloads to the servers where the data is stored, and pick input and output sources accordingly, all to minimize data transfer overhead. This principle is called data locality.
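The MapReduce model itself is simple enough to condense into a single-process sketch: each "node" maps over its local block of data, a shuffle step groups intermediate pairs by key, and reducers aggregate each group. This is only an illustration of the programming model, not of Hadoop's actual runtime.

```python
# Single-process sketch of MapReduce word count: map per block,
# shuffle by key, reduce each group.

from collections import defaultdict

def map_phase(block):
    return [(word, 1) for word in block.split()]

def shuffle(pairs):
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    return {key: sum(values) for key, values in groups.items()}

# In a real cluster, each block would be processed on the node that
# stores it (data locality); here the blocks are just strings.
blocks = ["big data big plans", "big data pipelines"]
intermediate = [pair for b in blocks for pair in map_phase(b)]
counts = reduce_phase(shuffle(intermediate))
```

The shuffle step is where the network cost lives in a real cluster, which is why schedulers try so hard to keep the map phase local to the data.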

Hadoop clusters are designed so that any node can fail while the system continues operating without interruption. Internal mechanisms in the architecture of the overall system make it fault-tolerant, with fault-compensation capabilities.


Distributed HDFS Architecture

 

Files stored in HDFS are divided into blocks and redundantly distributed among multiple servers, with a continuous balancing process that keeps the number of available copies in line with the configured parameters.
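The block placement described above can be sketched roughly as follows: cut a file into fixed-size blocks and copy each block to several distinct nodes. This is a simplification (real HDFS uses 128 MB blocks, rack awareness, and a NameNode tracking the placement); the round-robin placement below assumes the replication factor is smaller than the number of nodes.

```python
# Rough sketch of HDFS-style block splitting and replica placement.
# Round-robin placement yields distinct nodes per block as long as
# replication < number of nodes.

import itertools

def split_into_blocks(data: bytes, block_size: int):
    return [data[i:i + block_size] for i in range(0, len(data), block_size)]

def place_blocks(blocks, nodes, replication=3):
    placement = {}  # block index -> list of node names holding a replica
    ring = itertools.cycle(nodes)
    for i, _ in enumerate(blocks):
        placement[i] = [next(ring) for _ in range(replication)]
    return placement
```

A balancer process would then compare each block's live replica count against the configured factor and re-copy blocks from surviving nodes when one fails.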

Here are the design assumptions of the HDFS architecture:

  • Hardware failure is a norm rather than an exception
  • Streaming data access
  • Large data sets with a typical file as large as gigabytes and terabytes
  • Simple coherency model that favors data appends and truncates, but not updates and inserts
  • Moving computation is cheaper than moving data
  • Portability across heterogeneous hardware and software platforms

Hadoop HDFS is written in Java and can run on almost all major OS environments.

The number of nodes in major deployments can reach hundreds of thousands, with storage capacity in the hundreds of petabytes and more. Apple, Facebook, Uber, and Netflix are all heavy users of Hadoop and HDFS.

HBase: More Than A Distributed Database Architecture

But to improve our apps, we need more than just a distributed file system. We need a database with fast read and write operations (HDFS and MapReduce cannot provide fast updates because they were built on the premise of a simple coherency model).

Again, Google led the way with BigTable, a wide-column database that works on top of GFS and features consistency and fast read and write operations. The open-source community followed with HBase — an architecture modeled after BigTable and built on the ideas behind it.

HBase is a NoSQL database that works well for high-throughput applications and gets all the capabilities of distributed storage, including replication and fault and partition tolerance. In other words, it is a great fit for tables of hundreds of millions (and billions) of rows.


HBase Architecture on top of the Hadoop (Source)

 

Cassandra

There is also Cassandra, a distributed database similar to HBase that does not depend on HDFS and has no single master node. Cassandra avoids the complexities that arise from managing the HBase master node, which makes it a more reliable distributed database architecture.

Cassandra also performs better at writes than HBase, and it better suits always-on apps that need higher availability. Remember the CAP theorem and the trade-off between consistency and availability?
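Cassandra's masterless design works by partitioning keys across nodes on a token ring (consistent hashing), so no single coordinator owns the key-to-node mapping. A bare-bones sketch of that idea, greatly simplified (real Cassandra uses virtual nodes and replication along the ring):

```python
# Minimal consistent-hashing token ring: each node owns the arc of the
# hash space up to its token; a key belongs to the first node whose
# token follows the key's hash (wrapping around).

import bisect
import hashlib

class Ring:
    def __init__(self, nodes):
        self.tokens = sorted((self._hash(n), n) for n in nodes)

    @staticmethod
    def _hash(key: str) -> int:
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def node_for(self, key: str) -> str:
        token = self._hash(key)
        keys = [t for t, _ in self.tokens]
        i = bisect.bisect(keys, token) % len(self.tokens)
        return self.tokens[i][1]
```

Because any node can compute `node_for` locally, any node can coordinate a request — which is what removes the single master and its failure mode.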

Machine Learning Engines & Tools for Big Data Analytics

 

Hive

Hive is one of the most popular Big Data tools to process the data stored in HDFS, providing reading, writing, and managing capabilities for stored data. Other important features of Hive are providing the structure on top of stored data and using SQL as the query language.

Hive’s main use cases involve data summarization and exploration, which can be turned into actionable insights. Its specialized SQL dialect is called HiveQL, and it is easy to learn for anyone familiar with standard SQL; one just has to keep in mind the key-value nature of the stored data rather than thinking in standard relational RDBMS terms. However, for highly concurrent BI workloads, it is better to use Apache Impala, which can work on top of Hive metadata but with more capabilities.

Spark

The best Big Data tools also include Spark. Spark is a fast in-memory data processing engine with an extensive development API that allows data workers to efficiently execute streaming, machine learning, and SQL workloads with fast iterative access to stored data sets.

Spark can run in different job management environments, like Hadoop YARN or Mesos. It is also available in standalone mode, where it uses built-in job management and scheduling utilities.

YARN

YARN is a resource manager introduced in MapReduce v2 (MRv2) that supports many apps besides the Hadoop framework, like Kafka, Elasticsearch, and other custom applications.

Spark MLlib

Spark MLlib is a machine learning library that provides scalable and easy-to-use tools:

  • Common learning algorithms — classification, regression, clustering, and collaborative filtering
  • Feature extraction pipelines, transformation, dimensionality reduction
  • Persistence for algorithms pipelines & linear algebra and statistics utilities

KNIME Analytics Platform

KNIME is helpful for visualizing data pipelines and for ETL processing via modular components. With minimal programming and configuration, KNIME can connect to JDBC sources and combine them in one common pipeline.

Presto SQL

Interactive distributed data processing can be achieved with the Presto SQL query engine, which can easily run analytics queries against gigabytes and petabytes of data. The tool was developed at Facebook, where it was used on a 300 PB data warehouse, with 1,000 employees working in the tool daily and executing 30,000 queries that in total scan up to one petabyte of data each day. This puts Presto high up on the list of solid tools for Big Data processing.

Big Data Tools for Streams & Messages

Another modality of data processing is handling data as streams of messages. This typically involves data from sensors, ad analytics, customer actions, and high-volume feeds from devices like the cameras and LiDARs of autonomous systems.

Kafka

Kafka is currently the leading distributed streaming platform for building real-time data pipelines and streaming apps.

The use cases include:

  • Messaging: traditional message broker pattern of data processing
  • Website Activity Tracking: real-time publish-subscribe feeds in domains of page views, searches, and other user interactions.
  • Metrics: operational monitoring data processing.
  • Log Aggregation: collecting physical log files and storing them for further processing.
  • Stream Processing: multi-stage data processing pipelines.
  • Event Sourcing: support of apps built with stored event sequences that can be replayed and applied again for deriving a consistent system state.
  • Commit Log: serving as an external commit log for a distributed system, providing a re-syncing mechanism for failed nodes.
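Most of the use cases above rest on one abstraction: an append-only log per topic, from which each consumer reads at its own offset. The toy broker below illustrates only that abstraction; it is not Kafka's API and has none of its partitioning, persistence, or replication.

```python
# Toy in-memory version of Kafka's core abstraction: an append-only log
# per topic, with each consumer tracking its own read offset.

class MiniBroker:
    def __init__(self):
        self.topics = {}   # topic -> list of messages (the log)
        self.offsets = {}  # (topic, consumer) -> next index to read

    def publish(self, topic, message):
        self.topics.setdefault(topic, []).append(message)

    def poll(self, topic, consumer):
        """Return all messages this consumer has not yet seen."""
        log = self.topics.get(topic, [])
        start = self.offsets.get((topic, consumer), 0)
        self.offsets[(topic, consumer)] = len(log)
        return log[start:]
```

Because the log is never mutated, a new consumer (or one replaying for event sourcing) simply starts from offset zero and sees the full history — the property the Event Sourcing and Commit Log use cases rely on.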


Kafka Streams Architecture (Source)

 

Apache Storm

Apache Storm is a distributed stream processor that further processes the messages coming from Kafka topics. It is common to call Storm a “Hadoop for real-time data.” The technology is scalable, fault-tolerant, and built for analytics.

Apache NiFi

For an intuitive web-based interface that supports scalable directed graphs of data routing, transformation, and system mediation logic, one can use Apache NiFi. It is also simpler to get quick results with NiFi than with Apache Storm.


NiFi Web Interface

 

Conclusion

Modern big data technologies and tools are mature means for enterprise Big Data efforts, allowing companies to process up to hundreds of petabytes of data. Still, their efficiency relies on the system architecture that uses them, whether it is an ETL workload, stream processing, or analytical dashboards for decision-making. Thus, enterprises should explore the existing open-source solutions first and avoid building their own systems from the ground up at all costs — unless it is absolutely necessary.

If you need help in choosing the right tools and establishing a Big Data process, get in touch with our experts for a consultation.

About the Author

Pavlo Bashmakov is the Research & Development Lead @ Intellectsoft AR Lab, a unit that provides AR for construction and other augmented reality solutions.

Types of Data Analytics | Your 5-Minute Guide https://www.intellectsoft.net/blog/types-of-data-analytics/ https://www.intellectsoft.net/blog/types-of-data-analytics/#respond Mon, 29 Oct 2018 15:51:50 +0000 https://www.intellectsoft.net/blog/?p=15851 We explore different types of analytics and provide examples along the way.

The post Types of Data Analytics | Your 5-Minute Guide appeared first on Intellectsoft Blog.

 

Modern businesses increasingly rely on data to succeed, taking on the “data-centric” tag. As large amounts of data allow companies and their products to evolve quickly and efficiently to address the demands of customers, the trend will continue to strengthen.

Hence, we created a brief guide that explains the 4 types of data analytics, what purpose they serve, and what technology and type of talent they rely on. The guide also includes examples of data analytics from across industries for additional context.

Let’s start exploring the types of analytics below.

Descriptive Analytics

What is descriptive analytics: a preparatory stage in data processing that summarizes data from past periods to provide insights and prepare the gathered data for further analysis.

Descriptive Analytics Examples

For instance, a hotel chain would use descriptive analytics to determine the level of demand for new VIP suites in a hotel. Similarly, an insurance company can use descriptive data analytics to see what services are the most popular in a given season, while an online retailer can find out the least popular products from new arrivals.

Descriptive Analysis Steps

  1. Determine business metrics against set goals and establish KPIs to monitor progress
  2. Gather and prepare the data. Once you have gathered the right data from the enterprise systems, use descriptive analytics techniques to prepare it (for example, deduplication — eliminating duplicate copies of repeating data)
  3. Dedicated experts (data analysts, Business Intelligence experts) perform the analysis of the prepared data
  4. Visualization/presentation. The data is displayed to stakeholders in various charts, graphs, narratives, or tables
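Strung together, the steps above amount to a small pipeline: deduplicate the gathered records, summarise them, and present the summary. A minimal Python sketch, with hypothetical hotel-booking data standing in for real enterprise records:

```python
from collections import Counter

# Step 2: gather and prepare — deduplicate repeated booking records
bookings = [
    ("2024-06-01", "vip_suite"), ("2024-06-01", "vip_suite"),  # duplicate entry
    ("2024-06-02", "standard"), ("2024-06-03", "vip_suite"),
]
deduplicated = list(dict.fromkeys(bookings))  # keeps first occurrence, preserves order

# Step 3: summarise past demand per room type (the descriptive KPI)
demand = Counter(room for _, room in deduplicated)

# Step 4: present the summary as a simple table
for room, count in demand.most_common():
    print(f"{room}: {count}")
```

A real pipeline would pull from enterprise systems and feed a BI dashboard, but the shape of the work — prepare, summarise, present — is the same.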


Diagnostic Analytics

What is diagnostic analytics: a stage where the information gathered during descriptive analysis is compared against other metrics to find out why something happened.

Finding Anomalies with Diagnostic Analysis

Diagnostic analysis allows companies to identify anomalies, for example, sudden spikes in sales on a given day or drastic changes in website traffic. Here, data analysts need to single out the right data sets to help them explain the anomaly. Searching for the answer often involves drawing information from external sources. When the needed data is on the table, the analysts establish causal relationships and apply different techniques (probability theory, regression analysis, filtering, and others) to find the answer.
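The anomaly hunt described above can be sketched with a basic statistical filter: flag any day whose sales fall more than three standard deviations from the mean. The daily figures below are hypothetical:

```python
from statistics import mean, stdev

# Daily sales for two weeks; day 10 carries a suspicious spike (hypothetical figures)
daily_sales = [102, 98, 105, 99, 101, 97, 103, 100, 96, 240, 104, 99, 102, 98]

mu, sigma = mean(daily_sales), stdev(daily_sales)

# Flag days more than three standard deviations away from the mean as anomalies
anomalies = [(day, value) for day, value in enumerate(daily_sales, start=1)
             if abs(value - mu) > 3 * sigma]
print(anomalies)  # -> [(10, 240)]
```

Once a day is flagged, the analyst's job begins: pulling in the external data (a promotion, a holiday, a reporting error) that explains it.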

Diagnostic Analytics Examples

With diagnostic analytics, a hotel chain would compare the demand for VIP suites in different regions or hotels in a region, while the insurance company would, for example, get insights into what age group uses dental treatment the most in the target area. Meanwhile, an online retail store might use diagnostic analytics to discern what regions ordered a particular product from new arrivals more.


Predictive Analytics

What is predictive analytics: a data analysis type that allows companies to forecast problems or trends that might occur in the future and how they would unfold.

Predictive Analytics Examples

With this type of analysis, a hotel chain could predict how much revenue a promising new guest service would bring in a given region. A retailer could use in-depth data on their customers and other metrics to forecast the reaction to a new store type.

Predictive analysis involves advanced tools and technologies and should be based on a large amount of solid data (internal and external) to yield a reliable result. Importantly, achieving an effective forecast depends on a wide array of factors, including how volatile the situation is. Predictive analytics also requires continuous involvement from the data science team.

Predictive Analytics Techniques

  • Statistical analysis
  • Neural networks
  • Machine learning
  • Data mining

Today, predictive data analytics heavily rely on technologies like machine learning, as they can process large data sets quickly. Still, for the analysis to succeed, data science teams and software developers need to create predictive data algorithms for the business goal in question. This requires collaboration with key stakeholders and business analysts. Once again, the quality of the data for the analysis is critically important.
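As the simplest illustration of the statistical-analysis technique listed above, a least-squares trend line can be fitted to past figures and extrapolated one step ahead. The revenue numbers below are hypothetical:

```python
# Fit a least-squares trend line to past monthly revenue and extrapolate one month
# ahead — the simplest form of statistical predictive analytics (figures hypothetical).
revenue = [120.0, 132.0, 141.0, 155.0, 163.0, 178.0]  # last six months
months = list(range(len(revenue)))

n = len(revenue)
mean_x = sum(months) / n
mean_y = sum(revenue) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(months, revenue)) \
        / sum((x - mean_x) ** 2 for x in months)
intercept = mean_y - slope * mean_x

forecast = intercept + slope * n  # prediction for the next month
print(round(forecast, 1))
```

Real predictive models layer machine learning on top of ideas like this, but the principle is unchanged: learn a pattern from past data, then project it forward.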

Prescriptive Analytics

What is prescriptive analytics: a data analysis type that relies heavily on advanced technology to find the best solution based on the output of predictive analytics. Thus, prescriptive analytics determines what a company could do about a problem or trend foreseen by predictive analytics. Like predictive analytics, prescriptive analysis needs its own business logic and algorithms. As for prescriptive analytics techniques, machine learning is one of the most common.

Prescriptive Analytics Examples

On the one hand, prescriptive analytics techniques can be used to gain rich insights into customer behaviour across industries. On the other, machine learning algorithms can be trained to analyse stock markets and automate human decision making by presenting decisions based on large amounts of internal and external data. In any case, prescriptive analytics is a costly investment: the investors need to be confident that the analysis yields substantial benefits.
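A toy sketch of the prescriptive step: given a demand forecast expressed as scenario probabilities (the output of a predictive model), pick the action with the highest expected profit. All probabilities and payoffs below are hypothetical:

```python
# Given predicted demand scenarios (from a predictive model), pick the stocking
# action with the highest expected profit — a minimal prescriptive step.
scenarios = {"low": 0.2, "medium": 0.5, "high": 0.3}   # predicted demand distribution
profit = {                                              # payoff per action per scenario
    "stock_small": {"low": 10, "medium": 12, "high": 12},
    "stock_large": {"low": -5, "medium": 15, "high": 30},
}

def expected_profit(action):
    return sum(p * profit[action][s] for s, p in scenarios.items())

best = max(profit, key=expected_profit)
print(best, expected_profit(best))  # -> stock_large 15.5
```

Production systems replace the hand-written payoff table with learned models and optimisation, but the output is the same kind of thing: a recommended action, not just a forecast.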

Types of Data Analytics — Conclusion

Information is one of the most valuable business assets of today. The various types of data analytics allow businesses to improve their operations and customer experiences, providing insights and a clearer picture of the business in general. In the near term, modern companies, relying on the extensive experience of their management and employees, will mostly use descriptive and diagnostic analytics to aid human decision making. Still, as the role of data grows and analysis tools evolve, companies that want to stay ahead of the competition will also adopt predictive and prescriptive analytics and automate a number of operations.

Intellectsoft can help you create or improve your data strategy further. Schedule a consultation with our IT advisors.

The post Types of Data Analytics | Your 5-Minute Guide appeared first on Intellectsoft Blog.

]]>
https://www.intellectsoft.net/blog/types-of-data-analytics/feed/ 0
What is AI: Understanding Deep Learning https://www.intellectsoft.net/blog/ai-understanding-deep-learning/ https://www.intellectsoft.net/blog/ai-understanding-deep-learning/#respond Mon, 30 Oct 2017 15:23:33 +0000 https://www.intellectsoft.net/blog/?p=4417 Everything you need to know about DL, without diving into software engineering. Deep learning is set to take us to a technologically advanced, automated future of self-driving cars and robotic assistants.

The post What is AI: Understanding Deep Learning appeared first on Intellectsoft Blog.

]]>
Intellectsoft has been offering AI-based software solutions, so we have started a series of blog posts to shed light on what AI is, its applications, and how to implement it successfully in the enterprise. The previous post was a case-driven guide to machine learning.

Deep learning is set to take us to a technologically advanced, automated future of self-driving cars and robotic assistants. However, what it is and how it works remain subjects significantly more complex than most users imagine.

Join us as we take a closer look at deep learning without venturing into the neighboring territories of mathematics and software engineering.

What is deep learning

Deep learning is a subfield of machine learning — a type of data analysis that uses self-learning algorithms to analyse big data, learn from it, and eventually solve a problem, provide insights, or predict an outcome.

However, deep learning employs algorithms that are fundamentally more complex than those used in machine learning.

These self-learning algorithms are called deep neural networks.

Essentially, they are built to mimic biological neural networks of animals and humans to solve problems and tackle tasks of greater complexity, like driving vehicles and providing security using face recognition systems.


How do these neural networks succeed?

Like human neural networks, deep neural networks are arranged in a hierarchical manner. More importantly, the layers of deep neural networks are also able to learn abstract features, allowing them to observe nuances in complex data blocks.

Therefore, artificial neurons can detect the smallest abstract details — patterns of low-level features.

If we take human face recognition as an example, a deep learning algorithm will not only be able to discern one face from another, but detect differences connected to the smallest details, like pores or wrinkles. This goes up to mid-level features like eye color, ear, and eyebrow shapes, and further on to high-level features, like discerning the differences in face shapes.
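The layer-by-layer abstraction described above can be sketched as stacked transformations: each dense layer turns the previous layer's output into a new, more abstract set of features. A toy forward pass in pure Python, with hypothetical weights standing in for anything actually learned:

```python
import math

def layer(inputs, weights, biases):
    """One dense layer: weighted sums passed through a sigmoid non-linearity."""
    sigmoid = lambda z: 1.0 / (1.0 + math.exp(-z))
    return [sigmoid(sum(w * x for w, x in zip(row, inputs)) + b)
            for row, b in zip(weights, biases)]

# Raw pixels -> low-level features (edges) -> high-level score (face / not face).
pixels = [0.9, 0.1, 0.8]
low_level = layer(pixels, weights=[[1.0, -1.0, 0.5], [-0.5, 1.0, 1.0]], biases=[0.0, 0.1])
high_level = layer(low_level, weights=[[2.0, -1.5]], biases=[0.2])
print(high_level)  # a single score between 0 and 1
```

A real face-recognition network stacks dozens of much wider layers and learns its weights from data, but the mechanism — features of features — is exactly this.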

Thus, when it comes to the allocation of tasks in the enterprise of the future, machine learning algorithms will make valuable predictions in different departments based on business data, while deep learning will drive autonomous corporate vehicles and provide security with face recognition systems.


Deep learning subtypes and applications

There are several subtypes of deep learning, and each of them tackles a different task.

Convolutional neural networks (CNNs) are designed for computer vision tasks — acquiring, analysing, and understanding digital images to extract high-dimensional information to make a decision. CNNs can be used for video tracking, object recognition, motion estimation, and more.
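At the heart of a CNN is the convolution itself: a small filter slides over the image and responds strongly wherever its pattern appears. A minimal sketch with a hypothetical 2×2 vertical-edge filter:

```python
# Slide a 2x2 vertical-edge filter over a tiny grayscale "image".
# High responses mark where the pattern occurs (values hypothetical).
image = [
    [0, 0, 9, 9],
    [0, 0, 9, 9],
    [0, 0, 9, 9],
]
kernel = [[-1, 1],
          [-1, 1]]  # responds to a dark-to-bright vertical edge

def convolve(img, k):
    kh, kw = len(k), len(k[0])
    return [[sum(img[i + di][j + dj] * k[di][dj]
                 for di in range(kh) for dj in range(kw))
             for j in range(len(img[0]) - kw + 1)]
            for i in range(len(img) - kh + 1)]

feature_map = convolve(image, kernel)
print(feature_map)  # -> [[0, 18, 0], [0, 18, 0]]
```

A real CNN learns thousands of such filters and stacks them into the hierarchical feature detectors described earlier; this shows only the core sliding-window operation.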

Recurrent neural networks are created with built-in memory and are best suited for language processing tasks, like spot-on translation from one language to another.

Q-learning is a subtype of deep learning built on an action-value relation: an artificial intelligence agent determines the value of being in a specific state and taking an action in that state. For example, it will determine the consequences of the way it handles the controls of a car to turn or hit the brakes to avoid a collision.
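The action-value idea boils down to the classic Q-learning update: nudge the value of a state-action pair toward the observed reward plus the discounted value of the best next action. A toy sketch with hypothetical driving states, rewards, and starting values:

```python
# One Q-learning update: nudge the value of (state, action) toward the observed
# reward plus the discounted value of the best next action. Numbers hypothetical.
alpha, gamma = 0.1, 0.9           # learning rate, discount factor

Q = {
    ("approaching_obstacle", "brake"): 0.0,
    ("approaching_obstacle", "steer"): 0.0,
    ("safe_lane", "accelerate"): 5.0,
    ("safe_lane", "brake"): 1.0,
}

def update(state, action, reward, next_state):
    best_next = max(v for (s, a), v in Q.items() if s == next_state)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])

# Braking avoided a collision (reward 10) and led to the safe lane.
update("approaching_obstacle", "brake", reward=10, next_state="safe_lane")
print(Q[("approaching_obstacle", "brake")])
```

"Deep" Q-learning replaces the lookup table with a neural network so the agent can handle states (like camera frames) far too numerous to enumerate.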


Policy learning allows the AI agent to learn an elaborately detailed set of instructions that shows the best possible action for a given state. This subtype of deep learning will be used in AI-based voice assistants on the front desks of enterprises.

Reinforcement learning. A reinforcement learning agent has to decide how to act to perform a task through trial and error in a dedicated training environment. Earlier this year, Google’s AI-enabled program AlphaGo defeated the champion in Go — an abstract game that is more complex than chess. This became possible through reinforcement learning.

Even now, the promise of deep learning is grand — one only has to recall the rigorous testing of Tesla’s self-driving cars. While consumers still have to wait, business doesn’t. Today, more and more enterprises are implementing deep and machine learning to gain a competitive edge.

The reasoning behind it is straightforward: in a data-driven corporate culture that relies on large amounts of information, AI is a lifeline that helps enterprises make use of all their data quickly.

In the next post, we will explain how to successfully implement an AI-based solution in the enterprise.

Meanwhile, if you can’t postpone implementing an AI-based solution any longer, get in touch with us. We will help you with it from algorithm to implementation.

The post What is AI: Understanding Deep Learning appeared first on Intellectsoft Blog.

]]>
https://www.intellectsoft.net/blog/ai-understanding-deep-learning/feed/ 0
Your Concise Guide: What is Artificial Intelligence & Machine Learning https://www.intellectsoft.net/blog/artificial-intelligence-and-machine-learning-concise-guide/ https://www.intellectsoft.net/blog/artificial-intelligence-and-machine-learning-concise-guide/#respond Tue, 17 Oct 2017 14:15:18 +0000 https://www.intellectsoft.net/blog/?p=4384 A short, case-driven guide to AI and ML.

The post Your Concise Guide: What is Artificial Intelligence & Machine Learning appeared first on Intellectsoft Blog.

]]>
Artificial intelligence (AI) has firmly settled in the headlines of business publications and newsfeeds. Amazon went from machine learning (ML) algorithms that boost sales by improving recommendations to a quest to make the AI-powered Alexa speaker a ubiquitous device. Airbus is using AI-powered software to improve production with a self-learning algorithm that identifies patterns in production problems. Meanwhile, Coca-Cola is planning a location-sensitive vending machine with an assistant powered by AI-based technology.

The list will continue to grow, and it will grow fast. Paired with Big Data, AI-based software solutions are bringing precise automation and valuable insights to the majority of industries and business departments.

Still, the definition of artificial intelligence remains vague and, in many cases, complex.

As Intellectsoft has been creating AI-based software solutions, we are starting a series of posts about AI.

This post will answer the following questions: What is artificial intelligence? What is machine learning and what are its types? Finally, what are the three cornerstones of implementing machine learning algorithms?


What is Artificial Intelligence

In a broad sense, artificial intelligence is a field of study that uses a wide array of scientific, mathematical, and engineering methods to understand what is required for a machine to exhibit intelligence.

Concurrently, the much talked about machine learning and deep learning are essentially self-learning algorithms of different complexity.

They are part of the field of study of AI, but referring to them as “AI” is not accurate. The field encapsulates too many complex concepts; assigning the term to each separately creates confusion.

This is true for other general terms commonly used with self-learning algorithms — “software” and “technology.” These terms create further confusion by drawing associations with software development and everything that can be called a physical or digital technology. 

Leaving AI out of the name won’t be accurate either. After all, machine learning and deep learning algorithms are autonomous, coming up with their own decisions based on Big Data. What is more, deep learning essentially mimics the neural networks of the human brain.

As machine learning, deep learning, and adjacent algorithms are commonly used in the context of IT, the most accurate way to encapsulate them in one term would be to call them AI-based software solutions (or AI-based solutions).

This term covers all the self-learning algorithms applicable in the enterprise without touching concepts like Artificial Super Intelligence (think Skynet and Elon Musk’s presentations about dangerous robots) and questions like AI ethics and the value learning problem.

Meanwhile, “machine learning software” has also gained traction in 2017; it is also correct, applying to software solutions driven by machine learning algorithms.


Machine Learning Definition and Types

Machine learning is a type of data analysis that uses self-learning algorithms to analyse vast amounts of data, learn from the data, and then present a solution to a problem, provide insight, or make a prediction.

There are two types of machine learning algorithms.

In supervised learning, an algorithm runs on labeled data. This type is used to solve a problem or make a prediction.

Let’s look at an example:


Your enterprise wants to use machine learning to check the validity of your top managers’ signatures on contracts to avoid fraud.

You gather ten documents with signatures for each top manager (enough for a spot-on result, as every signature differs slightly from the rest), label them with the corresponding names, and then run this data through a machine learning algorithm.

Having established the connection, the ML-powered software will analyze new contracts by itself and let you know immediately if someone has tampered with a signature.
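The signature workflow above can be sketched as a nearest-centroid classifier: average the labeled examples per manager, then flag any new signature that sits too far from the claimed manager's average. The feature names and numbers below are hypothetical stand-ins for real signature descriptors:

```python
import math

# Labeled training data: (stroke_pressure, slant) features per known signature.
training = {
    "alice": [(0.80, 0.30), (0.78, 0.33), (0.82, 0.29)],
    "bob":   [(0.40, 0.70), (0.42, 0.68), (0.38, 0.72)],
}

# "Learn" one centroid per manager from the labeled examples.
centroids = {name: tuple(sum(v) / len(v) for v in zip(*samples))
             for name, samples in training.items()}

def verify(claimed_name, signature, tolerance=0.1):
    """Flag the signature if it sits too far from the claimed manager's centroid."""
    cx, cy = centroids[claimed_name]
    dist = math.hypot(signature[0] - cx, signature[1] - cy)
    return dist <= tolerance

print(verify("alice", (0.79, 0.31)))   # genuine-looking signature
print(verify("alice", (0.41, 0.69)))   # looks like Bob's hand — flagged
```

Production signature verification uses far richer features and models, but the supervised pattern is the same: labeled examples in, a learned decision rule out.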


In unsupervised learning, an algorithm runs on unlabeled data.

This type of ML is mostly used to derive insights by clustering data into groups by similarity and compressing it in size and dimensions.

For example, Netflix uses an unsupervised learning algorithm to deliver nuanced recommendations based on Big Data — what the viewers searched for, rated; the time, date, and device; as well as the browsing and scrolling behavior.

Eventually, four-star Oscar movies will sit side by side with two-star comedies in the “Recommended for You” section. Carlos Gomez-Uribe, VP of product innovation and personalization algorithms at Netflix, provides a spot-on explanation of the machine learning algorithm in a Wired interview:


People rate movies like Schindler’s List high, as opposed to one of the silly comedies I watch, like Hot Tub Time Machine. If you give users recommendations that are all four- or five-star videos, that doesn’t mean they’ll actually want to watch that video on a Wednesday night after a long day at work. Viewing behavior is the most important data we have.


Thus, unsupervised learning avoids the “intuition failure” (recommending a four-star movie on a page of other four-star movies), taking advantage of vast amounts of raw data, and allowing for valuable insights. Gomez-Uribe and his colleague Xavier Amatriain (engineering director at Netflix) wrote an article on the topic where they assert that machine learning algorithms save Netflix one billion dollars a year.
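The clustering at the core of unsupervised learning can be sketched with k-means: repeatedly assign points to their nearest centre, then move each centre to the mean of its cluster. A toy example with hypothetical viewer data — note that no labels are supplied; the groups emerge from the data itself:

```python
import math

# Unlabeled viewer data: (hours watched per week, share of comedy in viewing).
viewers = [(2, 0.9), (3, 0.8), (2.5, 0.85), (20, 0.1), (22, 0.15), (19, 0.2)]

def kmeans(points, centers, rounds=10):
    for _ in range(rounds):
        # Assign each point to its nearest centre...
        clusters = [[] for _ in centers]
        for p in points:
            i = min(range(len(centers)),
                    key=lambda c: math.dist(p, centers[c]))
            clusters[i].append(p)
        # ...then move each centre to the mean of its cluster.
        centers = [tuple(sum(v) / len(v) for v in zip(*cl)) if cl else c
                   for cl, c in zip(clusters, centers)]
    return centers, clusters

centers, clusters = kmeans(viewers, centers=[(0, 0), (30, 1)])
print(sorted(len(c) for c in clusters))  # two groups of viewers emerge
```

Recommendation systems like Netflix's work at vastly larger scale and with many more dimensions, but grouping by similarity without labels is the same underlying idea.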


Finally, what about custom machine learning algorithms?

There’s an array of common algorithms that can be applied to solve the majority of data problems. If no existing algorithm fits your problem, you can create your own.

What you should consider is that the accuracy of the final result depends on the following:

  • Effectiveness of the chosen algorithm;
  • How you apply it;
  • And how much useful data you have.

These are the three cornerstones of a successful algorithm implementation.

We hope our post has helped you understand AI and ML better, and that your project will be on the lists of essential machine learning examples in the future.

In the next posts, we will look at deep learning and outline how to ensure an impactful and full-fledged implementation of AI-based solutions in the enterprise.

If you can’t postpone the implementation of your machine learning software or other AI-based software, get in touch with us. We will help you create a comprehensive solution, from algorithm to implementation.

The post Your Concise Guide: What is Artificial Intelligence & Machine Learning appeared first on Intellectsoft Blog.

]]>
https://www.intellectsoft.net/blog/artificial-intelligence-and-machine-learning-concise-guide/feed/ 0
AI, Enterprise Software: Benefits for Business https://www.intellectsoft.net/blog/ai-enterprise-software-companies-benefit-technology/ https://www.intellectsoft.net/blog/ai-enterprise-software-companies-benefit-technology/#respond Thu, 30 Mar 2017 14:42:11 +0000 https://www.intellectsoft.net/blog/?p=3922 How enterprise software AI brings significant benefits to the enterprise and the clients.

The post AI, Enterprise Software: Benefits for Business appeared first on Intellectsoft Blog.

]]>
Solutions based on Artificial Intelligence (AI) are rapidly entering the business environment as one more kind of enterprise software. According to a report, 82% of respondents said their organization would be using AI in 2017, with 11% stating they were strongly considering deploying the technology. Business embraces AI-based solutions more confidently and quickly than AR and VR, as they hinge on algorithms that aim for predictability, automation, and constant self-learning. Consequently, an increasing number of companies across all industries welcome the technology with open arms — from country-wide organizations like the NHS to retail giant Alibaba.

AI-based Enterprise Software in Healthcare — NHS

The healthcare industry heavily relies on technology, and software is not an exception — enterprise mobile trends are as important to the health sector as they are to any other major industry.

Case in point: the British public health organization, the NHS (National Health Service), is currently testing an AI-based chatbot on its 111 non-emergency hotline. The trial period started in January 2017 and will run for six months. The enterprise mobile service is being offered to 1.2 million North Londoners.

Users download the app and describe their symptoms in text messages. In turn, the AI-based chatbot consults a vast medical database, delivering responses tailored to the user’s case. In the process, AI-based algorithms also assess the urgency of the situation: if it is high, the bot will tell the patient to schedule an appointment or go to the Accident & Emergency department.


Benefits of Enterprise Software AI

This healthcare software is a major effort to merge enterprise mobile solutions and healthcare services to make the industry more modern and digital.

The NHS expects the app to reduce working pressure through wintertime, but the organization can count on more benefits than that.

Benefits for NHS include:

  • Better customer experiences: interactions with the hotline have become simpler and faster.
  • Reduced operational costs: the bot will replace many support managers and save communication costs.
  • Data gathering: the chatbot gathers information that can be used in medical statistics.

AI-based Enterprise Software in Retail — Alibaba and Didi

The collaborative power of the latest software and technologies is set to secure a bright future for the retail industry.

For example, “Scanning your face credentials” is not something you expect to hear your smartphone say, but Face++, a Chinese startup valued at almost $1 billion, already offers its AI-based face recognition to big companies.

The software is used in popular apps: Alipay, a mobile payment app with over 120 million users, and Didi, China’s biggest ride-hailing company. In the first case, a smartphone scans a user’s face to secure a transaction. In the second, Face++ enterprise software checks whether the person behind the wheel is a legitimate Didi driver (as there are unauthorised ways to become one in the country): it requires the person being scanned to move their head or speak while the app is scanning them.


Looking from a different perspective, Face++ uses the technology in its own office, too. This puts AI-based enterprise software on the list of enterprise mobility trends, as it will largely improve the enterprise environment. Here, machine learning algorithms scan a person’s face from 83 different angles before allowing them to perform an action.

Benefits of Enterprise Software AI

  • High level of enterprise mobile security and security in general
  • Faster user experience: minimum action required from a user
  • Dependability
  • Mobility of corporate data: “face credentials” can be used in any company office

AI is likely to become a highly sought-after technology for many businesses, regardless of their industry. Next week, we will bring you more examples of how AI is transforming the tech market in entertainment, client retention, and more.

If you can’t afford to postpone your AI-based solution any longer, get in touch with us. We offer end-to-end solutions, from algorithm development to implementation.

The post AI, Enterprise Software: Benefits for Business appeared first on Intellectsoft Blog.

]]>
https://www.intellectsoft.net/blog/ai-enterprise-software-companies-benefit-technology/feed/ 0
Top 6 Most Popular Machine Learning API’s https://www.intellectsoft.net/blog/smart-applications-with-machine-learning-the-definitive-guide/ https://www.intellectsoft.net/blog/smart-applications-with-machine-learning-the-definitive-guide/#respond Thu, 23 Jun 2016 20:39:04 +0000 https://stanfy.com/?p=13359 Have you heard of machine learning? If you've ever used the internet before, your answer is probably yes.

The post Top 6 Most Popular Machine Learning API’s appeared first on Intellectsoft Blog.

]]>
Have you heard of machine learning? If you’ve ever used the internet before, your answer is probably yes. Seen as a buzzword by many, machine learning—together with big data, artificial intelligence, and virtual reality—is currently one of the most widely discussed concepts in the technology community.

Buzzword or not, machine learning has long since become something we use every day, one way or another. It’s working behind the scenes in most of our mobile apps, under the hood of most websites we visit, and is employed by the brands and service providers with whom we interact.

The ubiquity of machine learning has led to the creation of a number of ways to quickly and easily integrate it into virtually any application. On the flip side, it has become even more important to understand clearly what machine learning is, what it isn’t, and where it should be applied to achieve the best results.


Simple Explanation of Machine Learning

The concept of machine learning, as it’s being used in smart applications today, is basically a way of making highly accurate predictions about the future using data (as in Big Data) from the past. This can take various forms, from article recommendations on newspapers’ websites, to fraud detection in online stores, to sentiment analysis of interactions on social networks.

Since machine learning is often intertwined with the concept of artificial intelligence (AI), it’s also quite important to understand the difference between the two. In fact, machine learning is a subset of artificial intelligence, which, in turn, is the concept that a computer can learn to do anything a human can.


As opposed to AI in general, machine learning algorithms don’t necessarily act by themselves, but rather work with data and supply conclusions and predictions based on their analysis. The core of machine learning consists of a wide variety of algorithms, which should be chosen according to the nature of the task at hand and “trained” with data from the past.


This past data is also crucial for validating the predictions: normally, 75% of the available data would be used for training, and the resulting predictions would then be compared against the remaining 25%. This way, one can fine-tune the algorithms by trying to “predict” the past.
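That 75/25 validation loop can be sketched as follows: hold out a quarter of the labeled data, fit a trivial model on the rest, and score it on the holdout. The dataset and "model" below are deliberately toy-sized:

```python
import random

# Toy labeled dataset: value -> label ("big" if above an unknown threshold).
data = [(x, "big" if x > 50 else "small") for x in range(100)]
random.seed(7)
random.shuffle(data)

split = int(len(data) * 0.75)
train, test = data[:split], data[split:]   # 75% / 25%

# "Train": pick the threshold halfway between the two classes' means.
big = [x for x, y in train if y == "big"]
small = [x for x, y in train if y == "small"]
threshold = (sum(big) / len(big) + sum(small) / len(small)) / 2

# Validate on the held-out 25%.
correct = sum(("big" if x > threshold else "small") == y for x, y in test)
accuracy = correct / len(test)
print(f"threshold={threshold:.1f}, accuracy={accuracy:.2f}")
```

If the holdout accuracy is poor, you adjust the algorithm or gather better data and repeat — which is exactly the fine-tuning loop described above.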

Time to act

There’s been no better time to start implementing machine learning in all kinds of applications, Google’s Aparna Chennapragada argued in her talk at the recent TNW Europe conference. The evolution of the internet has meant that data has reached a different order of magnitude to what it was ten years ago, with videos, photos, social media posts, and online behaviour data generated by billions of users.

At the same time, most people now have ambient access to enormous computing power. The mobile paradigm changes and enhances the way we interact with our environment—thanks, in no small measure, to machine learning and artificial intelligence algorithms. With huge amounts of data and so many computers at hand, now is the best time to build smart applications, which can help users achieve more and make their everyday lives easier.

One good thing about building smart applications with machine learning is that almost every problem today can be solved with software. These new solutions include entertainment at home, transportation and dispatching cars through traffic, monitoring health and fitness, and so on.

“If you’re building a product now that solves a real world problem, chances are you will need to work with information problems, and you’ll have to figure out how to get it right,” Chennapragada said.

With the wide variety of information problems we’re facing, it’s important to recognise those cases where machine learning would be an efficient solution. The perfect problem in this case would be one that’s hard for a human to solve, but easy for the machine.

A good example is machine translation. Despite not always being 100% accurate, it satisfies users in most cases, since without the machine they wouldn’t be able to translate at all. On the other hand, inaccurate speech recognition is quite frustrating, as the user can actually type the query manually. The easier the problem is for a human, the more difficult it becomes to satisfy your users with a machine learning-based solution.

Werner Vogels, the CTO of Amazon, gave another good example of the use of machine learning in his talk at TNW Conference. The online retail behemoth uses algorithms to fight the paradox of choice, which is that after a certain number of options, the increase in choice actually makes customers less happy.

“If there are only three kinds of toothpaste, you choose one and you’re happy with your choice,” Vogels said. “But if there are 50, you get confused. You need to make a choice, but you’re not sure you’ve chosen the best one. It’s a big driver for unhappiness.

“This is important in grocery stores, but even more important online: we have not 50, but 5,000 toothpastes. Personalisation with machine learning helps reduce choices and make users happier.”

The secret sauce

Personalisation is another requirement that Chennapragada listed amongst the most important features of a smart application. Although there’s no universal formula for a successful product in this field, there is something that the app creators should keep in mind for reference:

AI + UI + I

Artificial intelligence and user interface need little explanation, and “I” means that the product has to adapt to the needs of every user in every scenario. Knowing your audience and their needs is key, and a failure to fulfil these needs inevitably leads to the product being abandoned.

In addition to knowing the audience, it’s important to be able to show your users the benefits of your smart application early on, Chennapragada added. A good example of that is Google-owned Waze, which uses machine learning and real-time data to predict traffic jams and navigate the user around them. The app adds value from the very first time it’s launched on the phone — and that’s something worth aiming for.

A big part of Chennapragada’s advice on building smart applications with machine learning algorithms can be boiled down to “don’t frighten your user” and “don’t screw up in mission critical tasks.” In particular, customers would generally prefer having a predictable interface over saving some time; for example, Google’s team at some point came up with an algorithm that could dynamically change the icon grid on the smartphone to highlight the apps that the customer was more likely to need at any given moment. The feature never went live.

Overall, the rule of thumb when it comes to building UI for smart apps is that its simplicity should be proportionate to your confidence in the AI. Whenever you’re not entirely sure that the AI has understood your user’s intention correctly, let the user clarify it further. The more training your algorithms receive this way, the better they become in the future.

Starting up

With machine learning gaining popularity across industries, it is slowly but surely becoming as ubiquitous as computing itself. In fact, you don’t even need an in-depth knowledge of coding or computer science to introduce it into your app. With the machine learning tools released by a number of companies, it can be done within an hour, provided you have sufficient data to start with.

Recently we ran an overview of APIs for natural language processing, most of which employ machine learning algorithms. Let’s take a look at some of the most popular solutions you can use for other aspects of your new smart applications.

Amazon Machine Learning

Adding a data source for a model on Amazon ML

The retail giant offers a cloud-based machine learning service that it claims is similar to what is used within the company for all sorts of predictions. Examples of possible uses for this product are fraud detection, content personalisation, document classification, and customer churn prediction.

The API comes with SDKs for Java, .NET, Python, PHP, Node.js, and Ruby. Amazon provides ample documentation for the product, which could, nevertheless, be slightly difficult for a non-programmer to follow in some cases. In combination with the Amazon Web Services (AWS) cloud, the solution can scale to “generate billions of predictions daily, and serve those predictions in real-time and at high throughput.”

Amazon Machine Learning charges $0.42 per hour of computing while analysing your data and building models. After that, you’ll pay $0.10 per 1,000 predictions if purchased in a batch, or $0.0001 per real-time prediction. The API is not available on the AWS free tier, so you will need to pay for it separately.
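For orientation, here is a hedged sketch of a real-time prediction request using the `machinelearning` client from boto3, Amazon’s Python SDK. The model ID, feature names, and endpoint are placeholders you would replace with values from your own Amazon ML console, and the actual network call (which requires AWS credentials) is left commented out:

```python
# Amazon ML expects every feature value to be passed as a string.
def build_predict_kwargs(model_id, record, endpoint):
    """Assemble keyword arguments for MachineLearning.Client.predict."""
    return {
        "MLModelId": model_id,
        "Record": {key: str(value) for key, value in record.items()},
        "PredictEndpoint": endpoint,
    }

kwargs = build_predict_kwargs(
    "ml-EXAMPLE-MODEL-ID",                       # placeholder model ID
    {"daysSinceLastOrder": 12, "plan": "pro"},   # hypothetical features
    "https://realtime.machinelearning.us-east-1.amazonaws.com",
)

# import boto3
# client = boto3.client("machinelearning", region_name="us-east-1")
# response = client.predict(**kwargs)
# print(response["Prediction"])
```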

Microsoft Azure ML

Another offering crafted by a major tech corporation, Azure ML is part of the bigger Cortana Intelligence Suite. Similarly to Amazon’s product, it offers a visual interface that makes it easier to build and train models and to choose which algorithms to use.

The product is centred around Machine Learning Studio, a framework for building modular solutions of your own. In addition to documentation, there’s a sizeable gallery of examples and experiments that show different use cases for the service. These include demand estimation, credit risk prediction, sentiment analysis, and hundreds of other applications.

As for the pricing, Microsoft charges a flat monthly fee of $9.99 per seat—the so-called “seat subscription”—as well as additional fees of $1 per hour of experimenting in Studio, $2 per hour of computation with the production API, and $0.50 per 1,000 production API transactions.
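To give a feel for the production API, here is an illustrative request body for a web service published from Machine Learning Studio. The URL, API key, and column schema below are placeholders — your own values appear on the service’s Request/Response page — and the network call itself is commented out:

```python
import json

SERVICE_URL = ("https://ussouthcentral.services.azureml.net/workspaces/"
               "<workspace-id>/services/<service-id>/execute?api-version=2.0")
API_KEY = "<your-api-key>"

# Studio web services take a columnar JSON payload of string values.
payload = {
    "Inputs": {
        "input1": {
            "ColumnNames": ["age", "income"],   # hypothetical schema
            "Values": [["34", "52000"]],
        }
    },
    "GlobalParameters": {},
}
body = json.dumps(payload)
headers = {
    "Content-Type": "application/json",
    "Authorization": "Bearer " + API_KEY,
}

# import urllib.request
# request = urllib.request.Request(SERVICE_URL, body.encode(), headers)
# with urllib.request.urlopen(request) as response:
#     print(json.load(response)["Results"])
```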

Google Cloud Prediction API

Like its competitors, Google offers a cloud-based machine learning solution for everyone to play with. Among the possible use cases are customer sentiment analysis, spam detection, recommendation systems, and more. There are a few comprehensive how-to guides on creating basic models for these cases that will help you get started.

In addition to the how-tos, Google offers a quite simple developer’s guide to the API, which goes through syntax and generally explains how the whole thing works.

Prediction API offers customers a free quota of 100 predictions, 5MB of trained data, and 100 streaming updates per day—with a lifetime cap of 20,000 predictions. In the paid tier, you will be charged $10 a month per Google Cloud Platform Console project, which will give you 10,000 predictions and streaming updates. Beyond those, you’ll pay $0.50 per 1,000 predictions, $0.05 per 1,000 streaming updates, and $0.002/MB of data trained.
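Here is a hedged sketch of a prediction request with the google-api-python-client library. The project and model names are hypothetical, the model is assumed to have been trained beforehand via trainedmodels().insert(), and the authenticated call is commented out:

```python
# The Prediction API takes a "csvInstance": a list of feature values in
# the same order as the columns of the training CSV.
query_body = {"input": {"csvInstance": ["This product is fantastic!"]}}

# from googleapiclient import discovery
# service = discovery.build("prediction", "v1.6", credentials=credentials)
# response = service.trainedmodels().predict(
#     project="my-gcp-project",   # hypothetical project
#     id="sentiment-model",       # hypothetical model name
#     body=query_body,
# ).execute()
# print(response["outputLabel"])
```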

BigML

BigML model dashboard

Catering to all kinds of users, from hobbyist developers to enterprises, BigML offers a flexible and well-documented machine learning API. Its use cases include market basket analysis, customer segmentation, and predictive maintenance.

For users not closely familiar with machine learning, there’s a small gallery of user scripts to help you get started. The documentation includes a comprehensive quick start manual that guides the user through the main steps, from feeding the data into the algorithm, to creating a model, to requesting a prediction.
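That source → dataset → model → prediction chain looks roughly like this with BigML’s official Python bindings (`pip install bigml`). The CSV file and input fields are hypothetical, credentials are read from the BIGML_USERNAME and BIGML_API_KEY environment variables, and the API calls are commented out so the sketch stands alone:

```python
# BigML chains four resources; each step consumes the previous one.
steps = ["source", "dataset", "model", "prediction"]
input_data = {"plan": "pro", "tenure": 3}   # hypothetical prediction input

# from bigml.api import BigML
# api = BigML()                              # credentials from the environment
# source = api.create_source("churn.csv")    # 1. feed the data in
# dataset = api.create_dataset(source)       # 2. describe it as a dataset
# model = api.create_model(dataset)          # 3. train a model
# prediction = api.create_prediction(model, input_data)
# api.ok(prediction)                         # block until the resource is ready
# print(prediction["object"]["output"])      # 4. the predicted value
```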

BigML offers a choice of subscriptions priced from $30 to $300 a month, as well as the pay-as-you-go model where you’re charged per MB of training data, model size, and number of predictions.

In addition to its cloud-based solution, BigML offers enterprise customers the possibility of deploying the machine learning system on-premise.

PredictionIO

Algorithm selection dialog in PredictionIO

Addressing the audience of developers who would like to stay in control of their machine learning deployment, PredictionIO is an open-source server that allows users to quickly build predictive engines. The company was acquired by Salesforce in February, but promised that the technology would remain free and open source indefinitely.

PredictionIO offers SDKs for Java (and Android), PHP, Python, and Ruby. There are also a few community-powered SDKs, including ones for Node.js, C#/.NET, Swift, and more. It also hosts a gallery of engine templates for different kinds of tasks, which would help get your app up and running in no time.
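Querying a deployed engine is a one-liner with the official Python SDK (`pip install predictionio`). The query fields depend entirely on the engine template you picked — the recommendation-style fields below are hypothetical — and the call is commented out because it needs a running engine on the default port:

```python
# A recommendation-template query: "four items for user u42".
query = {"user": "u42", "num": 4}

# import predictionio
# engine = predictionio.EngineClient(url="http://localhost:8000")
# result = engine.send_query(query)   # e.g. {"itemScores": [...]}
# print(result)
```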

AlchemyAPI

AlchemyAPI language analysis example

Similarly to PredictionIO, AlchemyAPI began as a startup but was later acquired by a large corporation: in 2015, IBM purchased the company, planning to integrate its solutions into the Watson platform.

The original API, however, is still available and can be used together with IBM’s cloud platform Bluemix. We mentioned the Alchemy Language API in the recent natural language processing round-up, but there’s more to it than just this one use case.

AlchemyAPI offers a machine-learning-based Visual Recognition API, as well as the recently launched AlchemyData News API. The latter provides an interesting solution for media analysis:

“AlchemyData News indexes 250k to 300k English language news and blog articles every day with historical search available for the past 60 days. You can query the News API directly with no need to acquire, enrich and store the data yourself – enabling you to go beyond simple keyword-based searches.”
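A GetNews query of that kind can be assembled with nothing but the standard library. The endpoint path and parameter names here follow AlchemyAPI’s documented conventions at the time of writing, and the API key is a placeholder:

```python
from urllib.parse import urlencode

params = {
    "apikey": "YOUR_API_KEY",                      # placeholder key
    "outputMode": "json",
    "start": "now-1d",                             # the last 24 hours
    "end": "now",
    "q.enriched.url.title": "machine learning",    # keyword filter on titles
    "return": "enriched.url.title,enriched.url.url",
}
url = "https://access.alchemyapi.com/calls/data/GetNews?" + urlencode(params)

# import urllib.request, json
# with urllib.request.urlopen(url) as response:
#     documents = json.load(response)["result"]["docs"]
```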

The Visual Recognition API allows users to classify 250 images per day and train a custom classifier using 1000 images for free. Beyond that, users will be charged pay-as-you-go fees for face recognition, image classification, etc. Some of the features will be free to use until July 1, so hurry up if you’d like to play with the API and decide if it’s for you.

Summary

With the growing number of reasonably priced and easy-to-set-up solutions, it’s simpler than ever before for app developers to find something that fits their needs perfectly. Following the advice of Werner Vogels, we’ve only mentioned a few well-tried options, which represent a mere fraction of the wide variety of products available.

If you’ve added machine learning to your application with any of the APIs mentioned above, or with an alternative, please share your hands-on experience in the comments section.

The post Top 6 Most Popular Machine Learning APIs appeared first on Intellectsoft Blog.

Machine Learning In Mobile Devices: What To Expect https://www.intellectsoft.net/blog/machine-learning-in-mobile-devices-what-to-expect/ https://www.intellectsoft.net/blog/machine-learning-in-mobile-devices-what-to-expect/#comments Mon, 01 Feb 2016 08:05:44 +0000 https://stanfy.com/?p=11796 Capabilities of mobile devices we're carrying every day have been increasing rapidly over the past few years, and this process doesn't seem to be slowing down.

The post Machine Learning In Mobile Devices: What To Expect appeared first on Intellectsoft Blog.

The capabilities of the mobile devices we’re carrying every day have been increasing rapidly over the past few years, and this process doesn’t seem to be slowing down. On one hand, today’s smartphones have unbelievable computing power; on the other hand, there are a number of ways for them to collect huge amounts of data about us. Combined in the right way, these two aspects open up countless possibilities for the new generation of apps and services.

The key to the future of the mobile experience appears to be machine learning, arguably the most popular branch of artificial intelligence research. The most widely used definition of the concept is that by Arthur Samuel, who described it as a “field of study that gives computers the ability to learn without being explicitly programmed.”

When applied to today’s mobile devices, this definition means that machine learning algorithms could process, find patterns in, and make sense of the data gathered by various sensors in order to significantly change and improve the way we interact with our gadgets. Actually, this is already happening—think Google Now, Siri, or SwiftKey—but the future prospects of machine learning on mobile devices are indeed breathtaking.

Let’s have a glimpse of what to expect in the near future.

Talk To Your Phone

Although voice recognition and natural language processing are already used in a number of smartphone applications like Siri and Google Now, the user experience with these is still far from perfect. With the development of machine learning algorithms, our phones—or tablets, or smartwatches—will be able not only to understand our input with near-perfect accuracy, but also to grasp its context.

One example of working in context is Now on Tap, Google’s service built on top of Google Now, which understands what you’re doing on the phone and processes your query using this knowledge. Here’s a great demo of the feature, in which the user asks “Who is the lead singer?” while listening to a song by Twenty One Pilots.

The context, however, could be much wider than the phone itself. Using data from cameras, GPS, and the accelerometer, the phone could understand you much better in the future.

Machine learning technologies are also key to developing automated translation services, including real-time spoken translation like the one Microsoft is trialling in Skype. Imagine being able to understand everyone and everything around you in any country on Earth—that’s something to look forward to, isn’t it?

Ubiquitous Fitness

Apps and devices that help you achieve your fitness goals are numerous across platforms and activity types; however, there is still a lot of room for improvement.

One of the main ways machine learning can revolutionise the experience with fitness apps is continuous tracking of activities without any need for user input. Current apps mostly track steps and sometimes heart rate continuously, but you still need to change the settings when going for a run or a swim.

With advanced machine learning algorithms that have access to data from numerous sensors, it may come to a point where your phone faultlessly determines any activity you’re doing at any given time, without any additional instructions from your side.
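To make the idea concrete, here is a deliberately tiny sketch of sensor-based activity recognition: each window of accelerometer readings is reduced to two features (mean and variance of the acceleration magnitude) and labelled by the nearest class centroid. The centroid values are invented; a real system would learn them—and far richer features—from labelled data:

```python
def features(window):
    """Summarise a window of (x, y, z) accelerometer readings."""
    mags = [(x * x + y * y + z * z) ** 0.5 for x, y, z in window]
    mean = sum(mags) / len(mags)
    var = sum((m - mean) ** 2 for m in mags) / len(mags)
    return (mean, var)

# Hypothetical per-activity centroids, as if learned offline.
CENTROIDS = {
    "still":   (1.0, 0.001),   # ~1 g with almost no variation
    "walking": (1.1, 0.10),
    "running": (1.4, 0.60),
}

def classify(window):
    """Label a window by its nearest centroid (squared distance)."""
    f = features(window)
    return min(
        CENTROIDS,
        key=lambda name: sum((a - b) ** 2 for a, b in zip(f, CENTROIDS[name])),
    )

still_window = [(0.0, 0.0, 1.0)] * 20    # phone resting flat on a table
print(classify(still_window))            # still
```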

User Authentication

One of the most futuristic-sounding forecasts about machine learning in the mobile space concerns the ways your smartphone could confirm your identity. Passwords and PIN codes are becoming obsolete with the integration of fingerprint scanners, but there are even more ways to authenticate a user.

The obvious way is face recognition, which, again, is a process based on machine learning techniques. It can be used not only for user authentication but also to improve context for other apps by recognising, for example, who is in the recent photos taken by the phone’s owner.

Another interesting way to make sure that the phone is being used by its owner is to analyse accelerometer patterns, i.e. the way the user holds the phone and uses it. Additionally, typing patterns can be used for this purpose as well, as it’s already done on desktop by Coursera.
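As a toy illustration of the typing-pattern idea, consider comparing the gaps between a user’s keystrokes with a profile recorded during enrolment. The timings and tolerance below are invented, and real keystroke-dynamics systems use far more sophisticated statistical models:

```python
def inter_key_gaps(timestamps):
    """Seconds between consecutive keystrokes."""
    return [later - earlier for earlier, later in zip(timestamps, timestamps[1:])]

def matches_profile(timestamps, profile_gaps, tolerance=0.05):
    """Accept the typist if the mean deviation from the profile is small."""
    gaps = inter_key_gaps(timestamps)
    if len(gaps) != len(profile_gaps):
        return False
    mean_dev = sum(abs(g - p) for g, p in zip(gaps, profile_gaps)) / len(gaps)
    return mean_dev <= tolerance

owner_profile = [0.12, 0.18, 0.15]   # enrolled gaps for a 4-key phrase

print(matches_profile([0.0, 0.13, 0.30, 0.46], owner_profile))  # True  (the owner)
print(matches_profile([0.0, 0.40, 0.80, 1.20], owner_profile))  # False (someone else)
```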

Looking Forward

While anticipating the revolution in mobile experience brought by advanced machine learning techniques coupled with sensor data, it’s useful to remember that this future is only possible if an adequate level of data security is reached. The more information our phones and other devices collect about us, the more important it becomes to protect it.

However, this definitely isn’t a reason to avoid using the new features of our gadgets that require more information: it’s all about being aware of how it’s used and whether it’s safe with the companies and service providers you’ve shared it with.

For app developers including ourselves, this means that security and data protection should always be the main priority in the development process.

Not sure how best to build your next-gen mobile application? Talk to us today!

Full AI: When Machine Matches Human’s Ability To Think https://www.intellectsoft.net/blog/full-ai-when-machine-matches-humans-ability-to-think/ https://www.intellectsoft.net/blog/full-ai-when-machine-matches-humans-ability-to-think/#respond Thu, 21 Jan 2016 17:23:28 +0000 https://stanfy.com/?p=11725 As follows from the name, this AI would be fully capable of performing any intellectual tasks that a human being can perform, including reasoning, learning, communicating in natural language, and so on.

The post Full AI: When Machine Matches Human’s Ability To Think appeared first on Intellectsoft Blog.

We’re certainly living in an age of machine learning and artificial intelligence. Some of the most well-known innovative startups and corporations, from Uber with its city simulations to Apple with Siri, rely on these technologies in order to improve their services and become as efficient as possible. Applied to virtually any industry, AI techniques can bring increased flexibility and efficiency, first of all through data analysis and automation of jobs that used to be monopolised by humans.

There is, however, another kind of AI that could change the whole game in many unpredictable ways. Although it’s still not quite there, a number of companies are working on what’s known as “full AI,” or artificial general intelligence (AGI). As the name suggests, this AI would be fully capable of performing any intellectual task a human being can perform, including reasoning, learning, communicating in natural language, and so on.

Full AI is arguably even more important for today’s businesses to be aware of than “narrow AI,” which can only be applied to a limited range of tasks. When (and if) created, AGI could make a great many technologies obsolete, while at the same time creating a huge playing field for new ideas.

Players To Watch

Currently, there are a few notable companies working on full AI. The biggest and most well-known is the UK-based DeepMind, which was acquired by Google in early 2014 for a reported $400 million. The company, which employs more than 100 researchers, uses deep learning algorithms to build a universal intelligent system.

“We use the reinforcement learning architecture, which is largely a design approach to characterise the way we develop our systems,” the company’s co-founder Mustafa Suleyman explained to TechWorld. “This begins with an agent which has a goal or policy that governs the way it interacts with some environment. This environment could be a small physics domain, it could be a trading environment, it could be a real world robotics environment or it could be an Atari environment.”

Using Atari games is a clever idea DeepMind has come up with to test its AI. The interesting thing is that the system receives raw pixel input and can trigger the action buttons, but initially it has no idea what the buttons do or how the game is played. It’s only told to maximise the score on the screen. So it learns on the go, in the same way a human being would.
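DeepMind’s agent combines this trial-and-error loop with deep networks over raw pixels, but the core mechanism—reinforcement learning from a score signal alone—can be sketched in miniature with tabular Q-learning on an invented toy “game”: a five-cell corridor where only reaching the rightmost cell scores a point.

```python
import random

random.seed(0)
N_STATES, GOAL = 5, 4
ACTIONS = (-1, +1)                     # step left, step right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.2  # learning rate, discount, exploration

for _ in range(500):                   # episodes of pure trial and error
    s = 0
    while s != GOAL:
        if random.random() < epsilon:  # explore occasionally
            a = random.choice(ACTIONS)
        else:                          # otherwise act greedily
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s_next = min(max(s + a, 0), N_STATES - 1)
        reward = 1.0 if s_next == GOAL else 0.0    # the "score on the screen"
        best_next = max(Q[(s_next, act)] for act in ACTIONS)
        Q[(s, a)] += alpha * (reward + gamma * best_next - Q[(s, a)])
        s = s_next

policy = [max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES)]
print(policy)   # the learned policy heads right in every state before the goal
```

Nothing in the code knows what “right” means; the preference emerges purely from the reward, which is the same principle at work when the system learns Breakout.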

Here’s how it goes with the iconic Breakout game:

DeepMind is not the only company that uses game environments to test and adjust its artificial general intelligence builds. Another notable player in the field is GoodAI, a spin-off of the Czech gamedev company Keen Software House.

GoodAI was launched last year by Keen Software House’s founder Marek Rosa, who invested $10 million of his own money in the initiative. The company employs about 30 AI specialists and also uses games to test its systems:

GoodAI has also found a way to benefit from the fact that its sister company, Keen Software House, is the developer of Space Engineers, a popular open world game.

“We’re [applying] this AI to Space Engineers for pure practical reasons,” Rosa told The Next Web. “This AI needs an environment where it will be operating, and the game seems to be a good environment for ‘childhood’ stages of the AI. It can’t do any harm in games, it can’t lead to financial losses or harm people. Any mistakes it makes are really low-cost.”

GoodAI has created a tool called Brain Simulator, which it released to Space Engineers players to allow them to create AIs for in-game purposes. This way, Rosa hopes to increase the rate at which the AI learns, with millions of additional “teachers” all over the world.

There are also other, less-known AGI developers, like the US-based Maxima, that pursue the same goals with more or less similar results—that is, the results that have been made public.

How Long To Wait

It’s safe to assume that certain players might have already moved far beyond playing Breakout on Atari, but there’s still little chance that we’ll see a working full AI in the near future. Those making forecasts on the exact timeframe seem to get more optimistic over time, though.

In a job posting, GoodAI describes itself as a “long-term (10+ years) privately-funded R&D project,” which kind of gives a hint about the founder’s idea of how much time is needed to get closer to the startup’s main goal.

In 2011, a survey was conducted at the Future of Humanity Institute (FHI) Winter Intelligence conference to see what scientists think about the probability of general AI appearing in the future. The results were that “machines will achieve human-level intelligence by 2028 (median estimate: 10% chance), by 2050 (median estimate: 50% chance), or by 2150 (median estimate: 90% chance).”

Yet another forecast was made in 2014 by Murray Shanahan, professor of cognitive robotics at Imperial College London, who works on deep learning techniques. In his opinion, “within the next 30 years I would say better than fifty-fifty that we’ll achieve human-level AI.”

Arguably the most significant challenge for AGI research is that no one appears to know for sure how exactly it’s supposed to look or work. Finding the most efficient system architecture and the right way to teach the machine to think is also something done mostly by trial and error.

“I think that the biggest obstacle is [to create] neural network modules that can learn and represent the data from the environment, do the generalizations and predictions while being not too demanding on computational resources,” GoodAI’s Rosa said. “Nobody has solved this yet; there is some good progress, but it’s still not enough. We need to overcome this obstacle, and then I believe things will go much faster.”

Academic researchers mention two main issues that slow AGI development down. The first is a lack of tasks and environments with which to properly assess an AI against all the requirements for AGI at once. The other issue, somewhat related to the first, is that the existence of an architecture satisfying a certain subset of the requirements doesn’t mean it can scale to satisfy all of them.

In addition to that, there’s the ethical aspect to full AI research. A list of concerns could look like this:

  • An AGI could develop hostile attitudes or thoughts against humans.
  • A minority could develop an AGI with the goal to fulfill their own interests (conflict with general human interests).
  • An AGI could take ethical rules too seriously (or extend them in its own way) and therefore take dangerous actions, such as harming people.
  • You could never trust an AGI unless your mental abilities were at a comparable level (the AGI could lie to you, and you wouldn’t even have the possibility to recognise that).
  • An AGI could create a new race of further AGIs just to have an appropriate society: mankind would be eclipsed.

Fortunately, we apparently have more than enough time to solve these.

All in all, full AI does not seem an immediate threat to existing business models and the way everything works, although it definitely is something we should be aware of and check on every now and then. It is also an extraordinarily challenging and exciting thing to think about, and it will become even more so as it gets closer.

 
