Thursday, October 31, 2019

Download Free E-Books on Engineering and Other Subjects

There are many web portals that distribute e-books free of charge to students and research scholars.

Following are the web links from which e-books on different subjects can be downloaded:
  • http://www.allitebooks.in/
  • https://www.pdfdrive.com/




Tips to Write a Literature Review Quickly

A literature review can be prepared from research papers without reading each paper from beginning to end. The highlights and main key points of a paper, including its keywords, can be used to write the statements of the literature review.



Sample literature review of a paper, written without reading the complete paper from beginning to end:
Wu et al. (2019)
The work in this paper focuses on fatty liver disease prediction using a machine learning classification model, with a specific implementation of the Random Forest approach. The methods include the screening of a dataset from New Taipei City Hospital with a sample size of 577, organized into segments such as Study Population, Clinical Data and Outcomes, Data Preprocessing, Variable Selection, Model Building and others. The limitation or research gap of the manuscript is its focus on a single dataset from one medical center; the work can be extended to multiple datasets from multiple locations or regions. The dataset is also small and can be scaled up.

Wu, C.C., Yeh, W.C., Hsu, W.D., Islam, M.M., Nguyen, P.A.A., Poly, T.N., Wang, Y.C., Yang, H.C. and Li, Y.C.J., 2019. Prediction of fatty liver disease using machine learning algorithms. Computer Methods and Programs in Biomedicine, 170, pp. 23-29.

Secure your Webcams and CCTVs

In the current scenario, it is a very common trend to install webcams or CCTV cameras in homes and offices for security and overall privacy. However, simply installing a CCTV camera or webcam does not guarantee security and privacy.

These webcams and CCTV cameras get indexed by Internet of Things (IoT) search engines, through which they can be viewed by anybody on the Internet. Shodan is one of the prominent IoT search engines.

We should be aware that if passwords are not implemented with a secure mechanism, the CCTV cameras and webcams can be accessed by anybody. In that case they are not a security measure; rather, they are an invitation to criminals.
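One way to check your own exposure is to look up your public IP address on Shodan with its official Python library. The following is a minimal sketch, assuming you have a Shodan API key; the key and the IP address below are placeholders:

    import shodan

    API_KEY = "YOUR_SHODAN_API_KEY"  # placeholder, not a real key
    api = shodan.Shodan(API_KEY)

    # Look up what Shodan has indexed about a given public IP address
    # (203.0.113.10 is a documentation address; replace it with your own).
    host = api.host("203.0.113.10")
    for item in host.get("data", []):
        print(item["port"], item.get("product", "unknown service"))

If any ports or camera services appear in this output, they are visible to anybody on the Internet as well.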

Tips to Write the Abstract of a Research Paper for Conferences and Journals

A research paper has many different components, and among these the abstract is a very important one. It should be written very carefully, and it should include data analysis and extracts from research reports.


Wednesday, October 30, 2019

Effective Abstract Writing for Research Papers and Conferences

The abstract is the first and one of the main components of a research paper or article. It should be written very carefully, and some data analytics and reports should be included to make a good impression; this gives the abstract a better chance of acceptance. Most good conferences call for the abstract first and for the full paper afterwards.

It is commonly seen that authors start the abstract with very general statements or definitions. The abstract should not include definitions or generic statements.

Real-world data should be added to the abstract so that it can be accepted at the first instance. Data analytics from prominent research portals, including statista.com, data.gov.in and many others, can be used.


Tips to Write a Quick Literature Review

A literature review can be written quickly using a few tips. With these, the research paper is read very quickly, and the text is written in parallel so that it can be placed in the literature review.

Following are the tips to write the literature review:
  • Focus on phrases in the abstract such as "In this work", "In this paper" and "Challenges". These phrases give a clear idea of the theme of the paper.
  • The keywords are taken from the title and abstract, and sentences can be formed from them very easily (a small sketch follows this list).
    • For example, suppose the keywords are "machine learning, classification, random forest approach".
    • A sentence can then be created such as: "In this manuscript, the problem formulation is done with a focus on a random forest based approach for classification, which is a key domain of machine learning".
  • Write down the headings of the paper separately and then form sentences from them.
With these tips and tricks, the literature review can be prepared very easily and rapidly.
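As an illustration of the keyword-to-sentence tip above, the following minimal Python sketch fills a sentence template from three keywords; the template wording and the function name are just examples:

    # Hypothetical template for turning paper keywords into a review sentence.
    TEMPLATE = ("In this manuscript, the problem formulation focuses on "
                "{method} for {task}, which is a key domain of {field}.")

    def review_sentence(field, task, method):
        """Build one literature review sentence from three keywords."""
        return TEMPLATE.format(field=field, task=task, method=method)

    print(review_sentence("machine learning", "classification",
                          "a random forest based approach"))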




Polynote: Open Source Polyglot Platform for Data Science and High Performance Computing by Netflix

Netflix released Polynote under an open source distribution for data scientists and researchers. Polynote is a platform that integrates data science with the powers of machine learning and artificial intelligence.

Polynote is a polyglot notebook with high performance support for Scala.

https://polynote.org/

It is a web based tool with support for Python programming and big data analytics tools.

Monday, October 28, 2019

Semantic Web: Deep Mining of Search Queries with Accurate Interpretation

According to the data analytics reports of InternetLiveStats.com, Google processes more than 40,000 search queries per second. Around 20% of the queries on Google are new every day and have never been entered before on the search engine. The major challenge for search engines is to deliver accurate results without irrelevant outcomes.

The Semantic Web, in broad terms, refers to the Web with meaning, built on interconnections. In the Semantic Web, the web is able to describe things in a way that computers can understand the actual meaning residing in a search query or browsing behavior.

The wonderful powers of the Semantic Web can be seen on prominent websites such as Skyscanner.net and Trivago.com. These portals compare real-time prices from different service-providing portals and present the best results in the form of a comparison. The back-end libraries of these portals communicate with different websites and fetch the results so that users can see the minimum price of hotels or flights. The protocols of the Semantic Web work with these websites to fetch the related information from multiple locations.

Whenever information about a specific service, product, company or object is required, users navigate different search engines so that the related website can be fetched and the particular information found. The major issue with traditional search engines is that the information may be scattered and irrelevant.

For example, if a user enters the search keyword “gold today” on a search engine, there are many haphazard results, which may include the following:

  • Gold Price
  • Gold Jewellers
  • Star Gold
  • Gold Loan

If the user is fond of watching movies on Star Gold, then a semantic search engine should return the “Star Gold” results when “Gold” is searched.

Open Source Tools and Libraries for Semantic Web

  • Apache Jena: http://jena.apache.org/
  • Apache TinkerPop: https://tinkerpop.apache.org
  • D2R: http://d2rq.org/d2r-server
  • Linked Media Framework: https://code.google.com/p/lmf/
  • Open Semantic Framework: http://opensemanticframework.org/
  • Paget: http://code.google.com/p/paget/
  • Protégé: https://protege.stanford.edu
  • RDFLib: https://github.com/RDFLib/rdflib
  • Semantic MediaWiki: http://semantic-mediawiki.org/wiki/Semantic_MediaWiki
  • Sesame: http://www.openrdf.org/
    • and many others
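
To make the idea concrete, here is a minimal sketch with RDFLib (listed above) that stores typed facts about the two meanings of "gold" from the earlier example; the resources and the example.org namespace are hypothetical:

    from rdflib import Graph, Literal, Namespace
    from rdflib.namespace import RDF, RDFS

    EX = Namespace("http://example.org/")
    g = Graph()

    # Two very different resources that a plain keyword search would conflate.
    g.add((EX.StarGold, RDF.type, EX.TVChannel))
    g.add((EX.StarGold, RDFS.label, Literal("Star Gold")))
    g.add((EX.GoldPrice, RDF.type, EX.CommodityPrice))
    g.add((EX.GoldPrice, RDFS.label, Literal("Gold Price")))

    # A semantic query can ask specifically for TV channels, not just "gold".
    query = """
        SELECT ?label WHERE {
            ?s a <http://example.org/TVChannel> ;
               <http://www.w3.org/2000/01/rdf-schema#label> ?label .
        }
    """
    for row in g.query(query):
        print(row.label)  # prints: Star Gold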

Parrot OS: A High Performance Linux Distribution for Digital Forensics

Parrot Linux (https://www.parrotlinux.org) is a powerful operating system developed for penetration testing, vulnerability analytics, and computer and digital forensics. It is gaining huge fame compared to Kali Linux, as anonymous web browsing is available in this operating system.

Initially released in the year 2013, Parrot OS gets updates very frequently, with new tools for cyber forensics professionals.

NLP based New Search Algorithm BERT by Google

Google updated its search engine with the new algorithm Bidirectional Encoder Representations from Transformers (BERT) on 25 October 2019. The new algorithm is capable of understanding each word of a search query in the context of the words around it.

BERT is a deep learning algorithm with the powers of Natural Language Processing (NLP).

Using NLP based search, Google is able to provide search results with semantic capabilities.
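To see what "bidirectional" means in practice, here is a minimal sketch using the open source Hugging Face transformers library with a publicly released pre-trained BERT model; this illustrates contextual encoding only, not Google's production search stack:

    import torch
    from transformers import BertModel, BertTokenizer

    tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
    model = BertModel.from_pretrained("bert-base-uncased")

    # The word "gold" receives a different vector in each query, because
    # BERT encodes every token using the words on both of its sides.
    for query in ["gold price today", "star gold movie schedule"]:
        inputs = tokenizer(query, return_tensors="pt")
        with torch.no_grad():
            outputs = model(**inputs)
        print(query, "->", outputs.last_hidden_state.shape)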

Sunday, October 27, 2019

CupCarbon Simulator for Smart City Scenarios in Internet of Things (IoT)

In today’s world, a huge range of devices are interconnected through wireless technologies, which gave rise to the state-of-the-art technology of the Internet of Things (IoT). A number of smart gadgets and machines are now monitored and controlled using IoT protocols. The technologies of IoT have now spread across the entire world, providing all-time connectivity among the connected devices.

According to the research reports of Statista.com, sales of smart home devices in the United States rose from 1.3 billion dollars in 2016 to 4.5 billion dollars in 2019. As per the news from the Economic Times, there will be around 2 billion units of eSIM based devices by the year 2025. With an eSIM, subscribers can use a digital SIM card for their smart devices, and services can be activated without the need for a physical SIM card. It is one of the recent and secure applications of the Internet of Things (IoT).

Beyond the traditional applications, IoT is under research for environmental monitoring with prior notifications to the regulating agencies so that appropriate actions can be taken. Reports from LiveMint.com underline that the Indian Institute of Technology (IIT) Kanpur and Ericsson are getting associated to handle the air pollution in Delhi. As per a report by Grand View Research Inc., the global NB-IoT market size is projected to reach more than 6,000 million dollars by the year 2025. NB-IoT refers to a radio technology standard for low-power wide-area networks (LPWAN), which provides huge coverage for smart devices with a high degree of connection performance.

A wide range of simulators and frameworks are available in free and open source distribution to simulate Internet of Things (IoT) scenarios. These libraries and simulators can be used for research and development so that the performance of different smart city and IoT algorithms can be analyzed. Before launching an actual IoT enabled smart city project, it is necessary to simulate it so that its behavior can be evaluated in advance on multiple parameters.

CupCarbon: http://www.cupcarbon.com

CupCarbon is a prominent, multi-featured simulator used for the simulation of smart cities and IoT based advanced wireless network scenarios. CupCarbon provides an effective Graphical User Interface (GUI) for the integration of smart city objects and wireless sensors. In addition, the simulator includes a SenScript Editor, in which the programming of sensor nodes and algorithms can be done. SenScript is the script language used for programming and controlling the sensors in the simulation environment; it offers a number of programming constructs and modules for simulating the smart city environment. The developer places and executes the SenScript code in the SenScript Editor.

The working environment of CupCarbon has numerous options to create and program sensors of different types. In the middle there is a Map View, in which the smart city under simulation can be viewed dynamically.

The sensors and smart objects are displayed in the map view. To program these smart devices and traffic objects, the CupCarbon toolbar provides programming modules through which the behavior of each and every object can be controlled and programmed.

Any number of nodes or motes can be imported into CupCarbon and programmed at random positions. In addition, weather conditions and environmental factors can be added so that the smart city project can be simulated under a specific environmental temperature. Using this option, the performance of the smart city implementation can be evaluated under different situations with varying city temperatures.

The SenScript Editor provides a programming editor in which the functions and methods of each sensor or smart device can be executed. It includes a wide range of built-in functions which can be called and attached to the sensors and smart objects in the simulator.

Markers and routes define the traffic paths for the vehicles in the smart city. With this approach, vehicles can follow the shortest path from source to destination, taking congestion or traffic jams into consideration. Similar implementations are used by online app based taxi and cab services in India as well as in other countries.
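The routing idea behind this is the classic shortest-path search. Below is a generic sketch with the networkx Python library, not CupCarbon's internal code; the toy road network and its weights (travel times, where a congested road simply gets a larger weight) are made up for illustration:

    import networkx as nx

    # Toy road network: nodes are junctions, edge weights are travel times.
    roads = nx.Graph()
    roads.add_weighted_edges_from([
        ("A", "B", 4), ("B", "C", 2),
        ("A", "C", 9), ("C", "D", 3),
        ("B", "D", 8),
    ])

    # Dijkstra-based search finds the fastest route from source to destination.
    route = nx.shortest_path(roads, source="A", target="D", weight="weight")
    print(route)  # ['A', 'B', 'C', 'D']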

On execution of the code written in SenScript, an animated view of the smart city is visualized, with the mobility of vehicles, persons and traffic objects. This view enables the development team to check whether there is any probability of congestion or loss of performance. Based on this visualization, improvements can be made to the algorithm and the associated SenScript code so that the proposed implementation provides a higher degree of performance with minimum resources.

In CupCarbon, the simulation scenario can be viewed like a Google Map, and it can be changed to Satellite View in a single click. Using these options, the traffic, roads, towers, vehicles and other objects can be visualized along with the congestion, so that the real-time environment can be felt in the simulation.

After running and visualizing a smart city scenario in CupCarbon, it is always necessary to analyze the performance of the smart city network to be deployed. For such evaluations of a new smart city project, parameters like energy, power, security and integrity need to be investigated. CupCarbon integrates options for energy consumption and other parameters so that researchers and engineers can view the expected effectiveness of the project.
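For intuition about the energy parameter, wireless sensor research often uses a first-order radio model in which transmission cost grows with packet size and distance. The sketch below implements that generic textbook model in Python; it is not CupCarbon's actual energy module, and the constants are illustrative:

    # First-order radio energy model (generic WSN textbook formulation).
    E_ELEC = 50e-9     # joules per bit for transmitter/receiver electronics
    EPS_AMP = 100e-12  # joules per bit per square meter for the amplifier

    def tx_energy(bits, distance_m):
        """Energy to transmit a packet of `bits` over `distance_m` meters."""
        return E_ELEC * bits + EPS_AMP * bits * distance_m ** 2

    def rx_energy(bits):
        """Energy to receive a packet of `bits`."""
        return E_ELEC * bits

    # Example: a 2000-bit packet sent over 75 m and received at the other end.
    print(tx_energy(2000, 75), rx_energy(2000))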

Government agencies as well as corporate giants are getting associated with big smart city projects so that better control over huge infrastructure and resources can be achieved. Research scholars and practitioners can propose novel and effective algorithms for smart city implementations; the proposed algorithms can be simulated using smart city simulators, and the performance parameters can be analyzed along different dimensions.

References
[1] CupCarbon Simulator, http://www.cupcarbon.com
[2] Economic Times, https://economictimes.indiatimes.com/tech/hardware/esim-based-devices-shipments-to-reach-2-billion-by-2025/articleshow/70040504.cms
[3] LiveMint, https://www.livemint.com/news/india/iit-kanpur-ericsson-partner-to-tackle-delhi-s-air-pollution-1564036920341.html
[4] Statista, The Statistics Portal, http://www.statista.com

High Performance Cloud Platforms for Scientific Computing Applications

Nowadays, software applications as well as smart devices and gadgets face enormous performance issues, including load balancing, turnaround time, delay, congestion, big data, parallel computation and many others. These key tasks traditionally consume huge computational resources, and low-configuration computers are not able to handle high performance workloads. The laptops and desktop computers available in the market are built as personal computers, and these systems face huge performance issues when high performance jobs need to be solved.

For example, a desktop computer or laptop with a 3 GHz processor can perform approximately 3 billion operations per second. High Performance Computing (HPC), by contrast, focuses on solving complex problems and working at trillions or quadrillions of operations per second, with high speed and maximum accuracy.
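To get a rough feel for such numbers on your own machine, the following Python sketch times a dense matrix multiplication with NumPy and estimates the achieved floating-point operations per second (a crude estimate, not a rigorous benchmark):

    import time
    import numpy as np

    n = 2000
    a = np.random.rand(n, n)
    b = np.random.rand(n, n)

    start = time.perf_counter()
    a @ b
    elapsed = time.perf_counter() - start

    # A dense n x n matrix multiply costs roughly 2 * n^3 floating-point ops.
    flops = 2 * n ** 3 / elapsed
    print(f"~{flops / 1e9:.1f} GFLOP/s")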

High performance computing applications are used in domains where the required speed and accuracy are much higher than in traditional scenarios. For this reason, the cost of deploying high performance computing is very high; still, it is required because of the sensitivity and the requirements of the application domain.

Following are the use cases and scenarios where high performance implementations are required:
  • Nuclear Power Plants
  • Space Research Organizations
  • Oil and Gas Exploration
  • Artificial Intelligence and Knowledge Discovery
  • Machine Learning and Deep Learning
  • Financial Services and Digital Forensics
  • Geographical and Satellite Data Analytics
  • Bio-Informatics and Molecular Sciences

A number of cloud platforms are available on which high performance computing applications can be launched without having access to an actual supercomputer. With these cloud services, billing is done on a usage basis, and it costs less than purchasing the actual infrastructure required for high performance computations.

Following are a few of the prominent cloud based platforms which can be used for advanced implementations, including data science, data exploration, machine learning, deep learning, artificial intelligence and many others.

Neptune: https://neptune.ml/

Neptune is a lightweight cloud based service for high performance applications, including data science, machine learning, predictive knowledge discovery, deep learning, monitoring of model training curves and many others. Neptune can be integrated with Jupyter notebooks so that Python programs can be easily executed for multiple applications.

The dashboard of Neptune is available at https://ui.neptune.ml/, on which multiple experiments can be run. Neptune works as a machine learning lab in which assorted algorithms can be programmed and their outcomes visualized. The platform is provided as Software as a Service (SaaS), so deployment can be done on the cloud; deployments can also be done on one's own hardware and mapped to the Neptune cloud.

In addition to the pre-built cloud based platform, Neptune has integrations with Python and R so that high performance applications can be programmed. Python and R are prominent programming environments for data science, machine learning, deep learning, big data and many other applications.

For Python programming, Neptune provides neptune-client, through which communication with the Neptune server can be done and advanced data analytics can be implemented on its cloud.
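A minimal sketch of experiment logging with the neptune-client API of that time is shown below; the project name and metric values are placeholders, and the API token is assumed to be set in the NEPTUNE_API_TOKEN environment variable:

    import neptune

    # Placeholder workspace/project name.
    neptune.init(project_qualified_name="my_workspace/sandbox")

    with neptune.create_experiment(name="quick-test"):
        for epoch in range(5):
            # Log a dummy training metric for each epoch.
            neptune.log_metric("accuracy", 0.80 + 0.02 * epoch)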

For integration of Neptune with R, there is an effective library, "reticulate", through which neptune-client can be used from R.

Detailed documentation for integrating Python and R with Neptune is available at https://docs.neptune.ml/python-api.html and https://docs.neptune.ml/r-support.html.

In addition, integrations with MLflow and TensorBoard are available. MLflow is an open source platform for managing the machine learning lifecycle, with reproducibility, advanced experiments and deployments. It has three key components: Tracking, Projects and Models. These can be programmed and controlled using the Neptune-MLflow integration.
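For reference, the Tracking component can also be used on its own; here is a minimal sketch (the parameter and metric names are arbitrary examples) that records one run in a local mlruns/ store:

    import mlflow

    # Record one run with a parameter and a metric.
    with mlflow.start_run(run_name="demo"):
        mlflow.log_param("n_estimators", 100)
        mlflow.log_metric("rmse", 0.42)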

The association of TensorFlow with Neptune is possible using Neptune-TensorBoard. TensorFlow is one of the powerful frameworks for deep learning and advanced knowledge discovery approaches.

With these assorted features and dimensions, the Neptune cloud can be used for high performance research implementations.

BigML: https://bigml.com/

BigML is a cloud based platform for implementing advanced algorithms on assorted datasets. The platform has a panel for implementing multiple machine learning algorithms with ease.

The dashboard of BigML gives access to different datasets and algorithms under the supervised and unsupervised taxonomy. The researcher can pick an algorithm from the menu as per the requirements of the research domain.

A number of tools, libraries and repositories are integrated with BigML so that programming, collaboration and reporting can be done with a high degree of performance and minimum error levels.

The algorithms and techniques can be attached to a specific dataset for evaluation and deep analytics. With this methodology, the researcher can work with the code as well as the dataset on an easier platform.
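The same source-to-prediction flow can be driven from code with the official BigML Python bindings. Here is a minimal sketch, assuming the credentials are set in the BIGML_USERNAME and BIGML_API_KEY environment variables and that iris.csv is a placeholder dataset:

    from bigml.api import BigML

    api = BigML()  # reads credentials from the environment

    # Typical pipeline: source -> dataset -> model -> prediction.
    source = api.create_source("iris.csv")  # placeholder file name
    api.ok(source)
    dataset = api.create_dataset(source)
    api.ok(dataset)
    model = api.create_model(dataset)
    api.ok(model)

    prediction = api.create_prediction(
        model, {"petal length": 4.2, "petal width": 1.3})
    print(prediction["object"]["output"])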

Following are the tools and libraries associated with BigML for multiple applications of high performance computing:
  • Node-RED for flow diagrams
  • GitHub repositories
  • BigMLer as a command line tool
  • Alexa Voice Service
  • Zapier for machine learning workflows
  • Google Sheets
  • Amazon EC2 image PredictServer
  • BigMLX app for macOS

Google Colaboratory: https://colab.research.google.com

Google Colaboratory is one of the cloud platforms for the implementation of high performance computing tasks, including artificial intelligence, machine learning, deep learning and many others. It is a cloud based service that integrates a Jupyter Notebook so that Python code can be executed as per the application domain.

Google Colaboratory is available as a Google App in the Google Cloud services. It can be invoked from Google Drive or directly at the URL https://colab.research.google.com.

The Jupyter notebook in Google Colaboratory is associated with a CPU by default. If a hardware accelerator such as a Tensor Processing Unit (TPU) or a Graphics Processing Unit (GPU) is required, it can be activated from the Notebook Settings.
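After switching the accelerator, a quick way to confirm that a GPU is actually attached is the following check (TensorFlow comes pre-installed on Colaboratory):

    import tensorflow as tf

    # Prints something like '/device:GPU:0' on a GPU runtime, '' otherwise.
    print(tf.test.gpu_device_name())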


The dataset can be placed in Google Drive and mapped to the code so that the script can directly perform the operations programmed in it. The outputs and logs are presented in the Jupyter notebook within Google Colaboratory.
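Inside a Colaboratory notebook, Google Drive is typically mapped with the built-in google.colab helper; the dataset path below is only an example:

    from google.colab import drive

    # Mounts your Drive under /content/drive after an authorization prompt.
    drive.mount("/content/drive")

    # Example path to a dataset stored in Drive (placeholder file name).
    data_path = "/content/drive/My Drive/dataset.csv"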

Deep Cognition: https://deepcognition.ai/

Deep Cognition provides a platform for the implementation of advanced neural networks and deep learning models. AutoML with Deep Cognition provides an autonomous Integrated Development Environment (IDE) in which the coding, testing and debugging of advanced models can be done.

It has a visual editor in which multiple layers of different types can be programmed. The layers which can be imported include core layers, hidden layers, convolutional layers, recurrent layers, pooling layers and many others.

The platform provides features to work with the advanced frameworks and libraries MXNet and TensorFlow for scientific computations and deep neural networks.
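For comparison, the same kinds of layers can be stacked in plain TensorFlow/Keras code. The sketch below is a generic toy classifier written for illustration, not a model exported from Deep Cognition:

    import tensorflow as tf

    # Toy image classifier mixing several of the layer types mentioned above.
    model = tf.keras.Sequential([
        tf.keras.layers.Conv2D(16, 3, activation="relu",
                               input_shape=(28, 28, 1)),  # convolutional layer
        tf.keras.layers.MaxPooling2D(),                   # pooling layer
        tf.keras.layers.Flatten(),                        # core layer
        tf.keras.layers.Dense(64, activation="relu"),     # hidden dense layer
        tf.keras.layers.Dense(10, activation="softmax"),  # output layer
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
    model.summary()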

Research scholars, academicians and practitioners can work on advanced algorithms and their implementations using cloud based platforms dedicated to high performance computations. With this type of implementation, there is no need to purchase specific infrastructure or devices; rather, the supercomputing environment can be hired on the cloud.