R and Python are the most important languages for data science. You should learn at least one of them well. In this post, I'll help you choose based on your experience and interests.

Most people say:

 If you have some programming experience, Python might be the language for you. Python's syntax is more similar to other languages' than R's is.
Well, they are more or less correct: Python provides a broader range of libraries than R, R is generally used more for statistical work, and Python is the more flexible, general-purpose language.

But, but...

If you want to be a data scientist, many people already working in the field will tell you to choose R instead of Python. In real life, a data scientist spends much of the time cleaning data, visualizing it, and running analyses such as clustering for the company, and this is where R plays a great role. As I said earlier, R is known for its ability to solve statistical problems, so it will be easier for you to analyze and work with your data.

**Note:** You have to keep yourself up to date with current books and software on data science, because you never know how and when a problem will arise that you have to solve. Data scientists study a lot, too.

Let’s talk about the Data Science Packages:

Python’s Packages – 

NumPy introduces objects for multidimensional arrays and matrices, as well as routines that allow developers to perform advanced mathematical and statistical functions on those arrays with as little code as possible.
SciPy builds on NumPy by adding a collection of algorithms and high-level commands for manipulating and visualizing data. This package includes functions for computing integrals numerically, solving differential equations, optimization, and more.
Pandas adds data structures and tools that are designed for practical data analysis in finance, statistics, social sciences, and engineering. Pandas works well with incomplete, messy, and unlabeled data, and provides tools for shaping, merging, reshaping, and slicing datasets.
IPython extends the functionality of Python’s interactive interpreter with a souped-up interactive shell that adds introspection, rich media, shell syntax, tab completion, and command history retrieval.
Matplotlib is the standard Python library for creating 2D plots and graphs. It’s pretty low-level, meaning it requires more commands to generate nice-looking graphs and figures than with some more advanced libraries.
Scrapy is an aptly named library for creating spider bots to systematically crawl the web and extract structured data like prices, contact info, and URLs. Originally designed for web scraping, Scrapy can also extract data from APIs.
NLTK is a set of libraries designed for Natural Language Processing (NLP). NLTK’s basic functions allow you to tag text, identify named entities, and display parse trees, which are like sentence diagrams that reveal parts of speech and dependencies.
Pattern combines the functionality of Scrapy and NLTK in a massive library designed to serve as an out-of-the-box solution for web mining, NLP, machine learning, and network analysis.
Seaborn is a popular visualization library that builds on matplotlib’s foundation. The first thing you’ll notice about Seaborn is that its default styles are much more sophisticated than matplotlib’s.
Bokeh makes interactive, zoomable plots in modern web browsers using JavaScript widgets. Another nice feature of Bokeh is that it comes with three levels of interface, from high-level abstractions that allow you to quickly generate complex plots, to a low-level view that offers maximum flexibility to app developers.
Basemap adds support for simple maps to matplotlib by taking matplotlib’s coordinates and applying them to more than 25 different projections.
NetworkX allows you to create and analyze graphs and networks. It’s designed to work with both standard and nonstandard data formats, which makes it especially efficient and scalable.
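To give a feel for how a couple of these packages fit together, here is a minimal sketch (the data and column names are made up purely for illustration) using NumPy for array math and pandas for handling an incomplete table:

```python
import numpy as np
import pandas as pd

# NumPy: vectorized math on a multidimensional array, no loops needed
arr = np.array([[1.0, 2.0], [3.0, 4.0]])
col_means = arr.mean(axis=0)  # mean of each column -> [2. 3.]

# pandas: label the data and deal with a missing value
df = pd.DataFrame({"height": [1.0, 3.0, np.nan],
                   "weight": [2.0, 4.0, 6.0]})
cleaned = df.dropna()         # drop the row with the missing height
summary = cleaned.describe()  # quick summary statistics per column

print(col_means)
print(summary.loc["mean"])    # column means of the cleaned table
```

The same pattern scales up: load messy real-world data into a DataFrame, clean it with pandas, then hand the underlying arrays to NumPy or SciPy for heavier numerical work.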

R’s Packages – 

sqldf is used to query data frames with SQL.
forecast is used for easy forecasting of time series.
plyr is used for data aggregation.
stringr is used for string manipulation.
RODBC, RMySQL, RPostgreSQL, and RSQLite are used for database connections.
lubridate is used for time and date manipulation.
ggplot2 is used for data visualization.
qcc is used for statistical quality control and QC charts.
reshape2 is used for data restructuring.
randomForest is used for building random forest predictive models.

Source – kdnuggets

Now, What to Choose?

The main issue with R is its lack of consistency. Algorithms are provided by third parties, which makes them comparatively inconsistent: each new algorithm you use can require learning a new way to model data and make predictions, and every package requires a new understanding, which slows development. The documentation is inconsistent as well, and often incomplete.

However, if you find yourself in an academic setting and need a tool for data analysis, it's hard to argue with choosing R for the task. For professional use, Python makes more sense. Python is widely used throughout the industry and, while R is becoming more popular, Python is the language more likely to enable easy collaboration. Python's reach makes it easy to recommend not only as a general-purpose and machine learning language but, with its substantial R-like packages, as a data analysis tool as well.


Both Python and R have great packages that maintain rough parity with the other, regardless of the problem you're trying to solve. There are so many distributions, modules, IDEs, and algorithms for each that you really can't go wrong with either. But if you're looking for a flexible, extensible, multi-purpose programming language that also excels at data science, Python is the clear choice. Most of the common tasks once associated with one language or the other are now doable in both. They are similar enough, in fact, that if most of your colleagues already use R or Python, you should probably just pick up that language.

Source – qz, google.