Machine Learning From Scratch [Part 2]

This is part two of Machine Learning from Scratch. You’re about to follow a short, straightforward tutorial on plotting a bar chart with Python and Pyplot, using a statistics concept called the decile.

In this lesson, you’ll learn how to:

  • Work with collections library and Counter module
  • Work with bucketed lists and deciles
  • Plot bar charts at an advanced level with histograms
  • Generate a line chart (X and Y axis) from the lists
  • Generate a bar chart

We’ll keep studying data visualization with Pyplot. Visualizing data is a big part of a data scientist’s or machine learning engineer’s job. The data itself is not that valuable – we must be smart enough to analyze it and display it in an understandable way.

As we’ve seen in part 1, Pyplot is an easy and fast library to plot your data, but it certainly has its limitations.

Now, let’s jump straight into our next task.

Let’s now declare a list of grades that will be our data object this time and also import the Counter module from the Collections library.

from collections import Counter
grades = [83, 95, 91, 87, 70, 0, 85, 82, 100, 67, 73, 77, 0]

Also, we need to import Pyplot. Assuming that you’re using the Jupyter notebook from the previous lesson, you just need to re-run the cell where you imported the module.

Now, let’s declare our histogram using Counter. Let’s bucket all grades by decile and put 100 with the 90s. Also, let’s print our histogram variable and check out its content.

A decile is a descriptive statistics’ concept which “is any of the nine values that divide the sorted data into ten equal parts so that each part represents 1/10 of the sample or population”.
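As an aside, Python’s standard library can compute those nine cut points directly via statistics.quantiles (Python 3.8+). This is shown on made-up data and isn’t used in the rest of the tutorial:

```python
from statistics import quantiles

# the nine values that split the sorted data into ten equal parts
data = list(range(1, 101))
deciles = quantiles(data, n=10)

print(len(deciles))  # 9 cut points
```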

To bucket our grades by decile, we’ll use Counter, which is a dict subclass for counting hashable items: elements are stored as dictionary keys, and their counts as dictionary values.

#Bucket grades by decile, but put 100 in with the 90s
histogram = Counter(min(grade // 10 * 10, 90) for grade in grades)

The expression min(grade // 10 * 10, 90) first floors each grade to its decile – // returns only the integer part of the division, so 83 // 10 * 10 is 80 – and then takes the minimum with 90, which caps 100 into the 90s bucket.
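To see exactly what that expression does to individual grades, here’s a quick sketch (the bucket helper is just for illustration and isn’t part of the tutorial’s code):

```python
def bucket(grade):
    # floor to the nearest ten, but cap at 90 so that 100 joins the 90s
    return min(grade // 10 * 10, 90)

print(bucket(83))   # 80
print(bucket(100))  # 90
print(bucket(0))    # 0
```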

You’ve probably observed the output of our histogram:

Counter({80: 4, 90: 3, 70: 3, 0: 2, 60: 1})

That is what our grades look like bucketed by decile: the keys are the deciles, and the values are how many grades fall into each one.

Now, let’s plot our histogram and see what it looks like:

plt.bar([x + 5 for x in histogram.keys()],  #Shift each bar right by 5
        histogram.values(),                 #Give each bar its correct height
        10,                                 #Give each bar a width of 10
        edgecolor=(0, 0, 0))                #Give each bar black edges

#x-axis from -5 to 105
#y-axis from 0 to 5
plt.axis([-5, 105, 0, 5])

plt.xticks([10 * i for i in range(11)])
#x-axis labels at 0, 10, ..., 100
plt.ylabel("# of Students")
plt.title("Distribution of Exam 1 Grades")
plt.show()

That’s what our distribution of grades will look like.
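For reference, here’s the whole histogram example gathered into one runnable cell, assembled from the steps above:

```python
from collections import Counter
from matplotlib import pyplot as plt

grades = [83, 95, 91, 87, 70, 0, 85, 82, 100, 67, 73, 77, 0]

# bucket grades by decile, putting 100 in with the 90s
histogram = Counter(min(grade // 10 * 10, 90) for grade in grades)

plt.bar([x + 5 for x in histogram.keys()],  # shift each bar right by 5
        histogram.values(),                 # give each bar its correct height
        10,                                 # give each bar a width of 10
        edgecolor=(0, 0, 0))                # give each bar black edges

plt.axis([-5, 105, 0, 5])                   # x-axis from -5 to 105, y-axis from 0 to 5
plt.xticks([10 * i for i in range(11)])     # x-axis labels at 0, 10, ..., 100
plt.ylabel("# of Students")
plt.title("Distribution of Exam 1 Grades")
plt.show()
```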

Statistics play a significant role in machine learning. Sometimes, pure statistics will satisfy your project’s objective. There is a long-running debate about whether statistical tools count as machine learning or not – but it’s merely a debate about labels.

We should be concerned with objective goals for our machine learning projects – no matter what you call them (AI, Data Science, Statistics…). It doesn’t matter if you’re running a basic linear regression or a hardcore deep learning framework: you must deliver practical results.

By the end of this article, you’ve had more practice handling data in Python and building visualizations with Pyplot. In the next article (Part 3), we’ll jump into NumPy, which is widely used for numerical computing.

Machine Learning From Scratch [Part 1]

This is part one of Machine Learning from Scratch

In this lesson, you’ll learn how to:

  • Import a module from a bigger library
  • Start working with Matplotlib and Pyplot
  • Declare lists of data
  • Generate a line chart (X and Y axis) from the lists
  • Generate a bar chart

Discover the power of data by implementing machine learning algorithms in Python. Here, I’ll show you the logic behind each technique, and you are going to be able to apply machine learning in different situations.

No more talking, let’s get straight to it.

Assuming that you have Anaconda and Jupyter Notebooks installed, create a new notebook.

Let’s import the pyplot module from the library matplotlib. Pyplot is useful for generating simple charts from data. It’s not recommended for heavy-duty data visualizations – you wouldn’t use it live in a web dashboard.

#For making simple plots
from matplotlib import pyplot as plt

Now, let’s declare two lists – each containing 7 elements. You’ll notice that their elements correspond: years[0] relates to gdp[0], and so on for every index.

years = [1950, 1960, 1970, 1980, 1990, 2000, 2010]

gdp = [300.2, 543.3, 1075.9, 2862.5, 5979.6, 10289.7, 14958.3]
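Because the lists correspond index by index, you can pair them up with zip – a quick sketch, not needed for the plot itself:

```python
years = [1950, 1960, 1970, 1980, 1990, 2000, 2010]
gdp = [300.2, 543.3, 1075.9, 2862.5, 5979.6, 10289.7, 14958.3]

# pair each year with its GDP value
pairs = list(zip(years, gdp))
print(pairs[0])  # (1950, 300.2)
```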

Now, using pyplot, let’s plot a line chart.

X-axis: years

Y-axis: gdp

Take a close look at plt.plot syntax. The attribute on the X-axis goes first, the Y-axis goes second. Then, you select the attributes you want:

  • color
  • marker (‘o’ means a circle as indicator in the chart)
  • linestyle
#create a line chart, years on x-axis, gdp on y-axis
plt.plot(years, gdp, color='green', marker='o', linestyle='solid')

#add a title
plt.title("Nominal GDP")

Now, let’s add a label to the y-axis and display the chart right inside the Jupyter notebook:

#add a label to the y-axis
plt.ylabel("Billions of $")
#display the chart
plt.show()

This is the output you should see.

Pyplot is a simple and fast solution to generate visualizations from data.

In business, you need to be agile. Pyplot charts may not be that good looking or interactive, but they will certainly do their job.

You don’t need to memorize each parameter of a function. For example, put your cursor inside plt.plot() and press shift + tab. The docstring of the function will pop up on your screen:

Here they are: all the parameters your function can receive. If you don’t specify them (apart from the x-axis and y-axis data), the default values will be used.

Now, let’s learn how to plot a bar chart.

Bar charts are useful when you want to show how some quantity varies among a discrete set of items.

Discrete items are not continuous values – they form a set of separate categories rather than a progression of numbers.

We want to visualize the names and heights in meters of the tallest buildings in the world. After a quick Google search, you will come up with two lists of corresponding items: building_names and heights

building_names = ["Burj Khalifa", "Shanghai Tower", "Makkah Tower", "Ping An Financial Center"]
heights = [828, 632, 601, 555]

As you imported Pyplot previously, it’s already available in your Jupyter notebook, so there’s no need to import it again. If you’ve closed the notebook, you will have to execute the import statement again.

If you type in plt.bar( and press shift+tab, the docstring of the function will pop up on your screen:

Again, you don’t need to memorize the parameters each function receives.

To draw the bar chart, we need an x-coordinate and a height for each bar. Since we have one bar per building, we can simply use range for the x-coordinates and pass heights for the heights:

plt.bar(range(len(building_names)), heights)

Let’s add a title to our bar chart and label the y-axis:

plt.title("Tallest buildings in the world") #add a title
plt.ylabel("Height in meters") #label the y-axis

To add labels to our x-axis, we’ll call xticks with the bar positions and the building names, and then display the chart:

plt.xticks(range(len(building_names)), building_names)
plt.show()

Our bar chart should look like this:

We’ve just generated a bar chart using Pyplot. Note that the labels are messy thanks to the long building names. Pyplot is fast but not pixel-perfect. Deal with it.
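For reference, here’s the whole bar-chart example gathered into one runnable cell, assembled from the steps above:

```python
from matplotlib import pyplot as plt

building_names = ["Burj Khalifa", "Shanghai Tower", "Makkah Tower", "Ping An Financial Center"]
heights = [828, 632, 601, 555]

# one bar per building: x-coordinates 0..3, heights from the list
plt.bar(range(len(building_names)), heights)

plt.title("Tallest buildings in the world")             # add a title
plt.ylabel("Height in meters")                          # label the y-axis
plt.xticks(range(len(building_names)), building_names)  # label the x-axis
plt.show()
```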

That’s good for now. I believe that short tutorials are more productive than long ones.

In the next tutorial of Machine Learning from Scratch, we’ll keep playing around with Pyplot, collections, histograms, and line charts.


A Hands-On Approach to Machine Learning (part 1)

We’ll start by defining some important concepts and attributes of machine learning systems – things that need to be understood before you start coding one. As soon as you finish reading this article, you’ll have a notion of why you would use an ML solution and what you need to build it.

Defining Machine Learning

Machine Learning is the science of programming computers so they can learn from data. An ML-based system processes raw data and transforms it into training instances, which make up the training set.


Raw data: pure data, unprocessed

Training instances: processed sample data. E.g: salary, purchased or not, nationality…

Training set: a set of multiple training instances used by the system to learn autonomously, algorithmic-based.

This is an example of a training set. Each line is a training instance. Note that the “Purchased” field is still unprocessed – your computer understands 0s and 1s, not Yes’s and No’s.

From the example above:

  • Country, Age, Salary, and Purchased are attributes
  • A feature is usually an attribute plus its value (“Country” = “France”)
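To illustrate that last point, here’s how the “Purchased” field could be encoded as numbers – a minimal sketch in plain Python, with made-up rows matching the table’s columns:

```python
# hypothetical training instances matching the table's columns
rows = [
    {"Country": "France", "Age": 44, "Salary": 72000, "Purchased": "No"},
    {"Country": "Spain",  "Age": 27, "Salary": 48000, "Purchased": "Yes"},
]

# encode Yes/No into 1/0 so the computer sees numbers, not strings
for row in rows:
    row["Purchased"] = 1 if row["Purchased"] == "Yes" else 0

print([row["Purchased"] for row in rows])  # [0, 1]
```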

ML/AI systems vs. Mechanical Systems

Spam filters were one of the first practical and mainstream uses of machine learning, and they illustrate this difference well. A spam filter usually analyzes words within the email itself, looking for red flags.

If you were building a mechanical spam filter, you would have to hardcode all spam red flags. While that might be effective, it’s not efficient, since spam strategies are constantly changing.

What I’m saying is that a lot of human effort would be required to keep such a mechanical system up to date. In an ML scenario, the system would learn incrementally by itself by being fed training data (online learning, preferably).

Training what we can’t (or don’t want to) code


Coding a speech recognizer or a personal assistant like Siri or Alexa became possible thanks to machine learning. Well, they could be coded with no ML traces, but that’s the kind of work that becomes unnecessary when you have the powerful tools of ML.

Imagine if you had to hardcode all possible variations of each word and assign all of them to the corresponding letters… a huge chunk of work. Writing an algorithm that learns by itself is a better idea, given many examples for each word.

We can now conclude that ML and AI open up countless possibilities for innovation, since development time becomes shorter.

So, when to use machine learning?

  • Dynamic environment (ML can adapt to new data using online/batch learning)
  • Getting intel about complex problems and large amounts of data
  • Complex problems that are not so easy to code or would require a lot of human hours
  • Huge amounts of data and no known or developed algorithms


Machine Learning Systems

There are three ways to generally classify machine learning systems or algorithms:

  • Whether they are trained or not under human supervision (supervised, unsupervised, semisupervised or Reinforcement Learning)
  • Whether they can learn incrementally while running (online or batch learning)
  • Whether they compare new data points to known data points, or detect patterns in the new training data and build predictive models (instance based or model-based learning)

How systems are trained

Supervised Learning

The training data fed to the algorithms includes the desired solutions, called labels. Therefore, every training instance will contain a label.

Classification and Regression are typical supervised learning tasks:

  • Classification will set instances into different groups
  • Regression (or prediction) will predict values or actions by learning from predictors and their labels
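As a tiny illustration of regression – learning a model from predictors and labels – here’s a least-squares line fit in plain Python. The data is made up, and the code is just a sketch of the idea:

```python
# fit a line y = a + b*x by least squares, using only the standard library
xs = [1, 2, 3, 4]      # predictor: e.g. years of experience
ys = [30, 35, 40, 45]  # label: e.g. salary in $1000s

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
b = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
    sum((x - mean_x) ** 2 for x in xs)
a = mean_y - b * mean_x

print(a + b * 5)  # predict the label for x = 5 -> 50.0
```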

Most important supervised learning algorithms:

  • Neural Networks (which can also be unsupervised)
  • Decision Trees
  • Random Forests
  • Linear Regression
  • Logistic Regression

Unsupervised Learning

The training data is unlabeled – the system learns by itself through data interpretation. There are three general uses for unsupervised learning:

  • Clustering
  • Association rule learning
  • Visualization and dimensionality reduction

Clustering will divide instances into clusters – which are groups that share traits in common.

Dimensionality reduction aims to simplify the data without losing too much information. For instance, the price of a house may be correlated with its location, so a dimensionality reduction algorithm can merge them into one feature – a technique called feature extraction. This considerably improves performance.

Anomaly detection is also a task for unsupervised learning, like credit card fraud detection.

Semisupervised Learning

Combination of both supervised and unsupervised algorithms – a portion of the data is labeled, but the other is not. Usually, the system will identify patterns or will cluster the data and then the programmer needs to insert labels to each pattern or cluster.

Deep Belief Networks (DBNs) are based on Restricted Boltzmann Machine (RBM), which is an unsupervised learning component. RBMs are trained through unsupervised learning, and then the system is fine-tuned using supervised learning techniques (insertion of labels).

Facial recognition is a good example of semisupervised learning: the system by itself will identify that the person is there and, depending on the system, will also identify their physical attributes (hair and eye color, skin tone, shapes…), and then the instance will be fed with the person’s name and the necessary information.

Reinforcement Learning

The learning system is an agent in reinforcement learning. This agent will observe the environment and perform actions to receive rewards or penalties. It will learn by itself what’s the best strategy – called policy – to be rewarded more often. A policy defines what action the agent should choose in a given situation.

Reinforcement Learning is commonly used in robots with higher degrees of freedom, like walking, picking objects and opening doors!

Learning incrementally or not?

  • Batch Learning: the system doesn’t learn incrementally, so it must be trained on all available data at once, typically offline. Once trained, it goes into production and doesn’t learn anymore. If the system needs to learn from new data, it must be retrained from scratch on the full dataset and then swapped in for the old system. Retraining can take a long time – though the whole process can be automated – so batch systems are a poor fit for rapidly changing data.
  • Online/Incremental Learning: the system learns incrementally by being fed data instances continuously, either individually or grouped in small batches. It works great with data that changes a lot.

The learning rate is something you need to set when working with online learning systems. It defines how fast the algorithm should adapt to new data. A high learning rate makes the system adapt to new data quickly, but it also tends to forget the old data. A lower learning rate gives the system more inertia, which helps it ignore noise in the data.
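The effect of the learning rate can be sketched with a simple running estimate that blends each new value into the old one (a toy illustration, not any specific library’s API):

```python
# online-learning sketch: an exponentially weighted running estimate,
# where the learning rate controls how fast new data overrides old data
def update(estimate, new_value, learning_rate):
    return estimate + learning_rate * (new_value - estimate)

estimate = 10.0
for value in [20, 20, 20]:  # a stream of new observations
    estimate = update(estimate, value, learning_rate=0.5)

print(round(estimate, 2))  # 18.75 -- adapting toward the new data
```

With learning_rate close to 1 the old estimate is forgotten almost immediately; with a small rate the system has more inertia and is less sensitive to noisy observations.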

Instance-based or Model-based learning?

Generalization is an important task of machine learning systems. Algorithms must be able to generalize to new instances – that is, handle incoming data they have never seen.

  • Instance-based learning: generalizes to new data using a similarity measure. The system compares incoming data with already-learned instances and assigns each new instance accordingly.
  • Model-based learning: generalizes from a set of examples by building a model from them. That model is then used to predict where incoming data will fit.
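The instance-based approach can be sketched as a 1-nearest-neighbor classifier – a minimal illustration with made-up points, not a production implementation:

```python
# instance-based learning sketch: 1-nearest-neighbor
# each already-learned instance is ((x, y), label)
known = [((1, 1), "A"), ((1, 2), "A"), ((8, 8), "B"), ((9, 8), "B")]

def classify(point):
    # compare the new point with every already-learned instance
    def sq_dist(p):
        return (p[0] - point[0]) ** 2 + (p[1] - point[1]) ** 2
    return min(known, key=lambda inst: sq_dist(inst[0]))[1]

print(classify((2, 1)))  # "A" -- closest to the (1, 1) group
print(classify((8, 9)))  # "B"
```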


In order to build a machine learning system for  your needs, the following points need to be specified:

  • How is it going to be trained? Supervised, Unsupervised,  Semisupervised or through Reinforcement Learning?
  • How is it going to learn? Incrementally (online) or through batch learning (offline)?
  • How is it going to generalize? Instance or model based?

In Part 2, we’re going to build a machine learning system from scratch! Subscribe to my newsletter to keep updated.

Questions? Comment below or email them to

Essential Python Machine Learning Libraries

Here are essential Python libraries that will save you a lot of time when doing data analysis and machine learning. I’ve listed the most used libraries and their main uses.


NumPy
  • Numerical Python, used for numerical computing.
  • Fast multidimensional array object ndarray
  • Operations between arrays
  • Reading and writing array-based datasets to disk
  • Linear algebra, fourier transform, random numbers
  • C API to enable extensions and C or C++ code to access data structures and computational facilities



pandas
  • High-level data structures and functions that make working with structured or tabular data fast and easy
  • DataFrame – a tabular, column-oriented data structure with both row and column labels – and Series, a one-dimensional labeled array object
  • NumPy + relational databases
  • Reshape, slice and dice, aggregations, subsets of data
  • Data structures with labeled axes supporting automatic or explicit data alignment
  • Integrated time series functionality
  • The same data structures handle both time series and non-time-series data
  • Arithmetic operations and reductions that preserve metadata
  • SQL functions
  • Flexible handling of missing data

matplotlib
  • Plots and other two-dimensional data visualizations.

SciPy
  • Collection of packages addressing a number of different standard problem domains
  • scipy.integrate: numerical integration routines and differential equation solvers
  • scipy.linalg: Linear algebra routines and matrix decompositions
  • scipy.optimize: function optimizers (minimizers) and root-finding routines
  • scipy.signal: signal processing tools
  • scipy.sparse: sparse matrices and sparse linear system solvers
  • scipy.special: wrapper around SPECFUN, a Fortran library implementing common mathematical functions such as the gamma function
  • scipy.stats: continuous and discrete probability distributions (density functions, samplers, continuous distribution functions), various statistical tests and more descriptive statistics

scikit-learn
  • Classification: nearest neighbors, random forest, logistic regressions, SVM…
  • Regression: Lasso, ridge regression…
  • Clustering: k-means, spectral clustering…
  • Dimensionality reduction: PCA, feature selection, matrix factorization…
  • Model selection: Grid search, cross validation…
  • Preprocessing: feature extraction and normalization

statsmodels
  • Statistical analysis and econometrics
  • Regression models: Linear regression, generalized linear models, robust linear models, linear mixed effect models…
  • Analysis of variance
  • Time series analysis
  • Nonparametric methods: Kernel density estimation and regression
  • Visualization
  • Statistical inference, uncertainty and p-values

How To Handle Meetings

It’s not always the most popular person who gets the job done.

From all my experience in the business world, meetings are (almost) always terrible. In the absence of leaders who set things straight, meetings drift like unmanned ships on the ocean.

Meetings must be productive; otherwise, they’re simply a waste of time. Of course, that’s different from building a solid and healthy relationship with your co-workers or teammates. That’s extremely important, but business meetings must be designed to get things done.

Do you ever wonder why? Businesses are supposed to deliver value in the form of physical or digital products and services. Meetings are supposed to set and refresh operational points, data, and intelligence among leaders and workers – and that won’t get done by screwing around.


What is a business meeting?

A meeting is any encounter between two or more people to talk about anything.

A business meeting is an encounter between two or more people to discuss business perspectives, progress updates, feedback, or any subject valuable and indispensable to operations.

Here’s a common scenario that we’ve all been through:

a meeting starts in order to discuss subject XYZ and, for the next thirty minutes, XYZ is never touched. Instead, participants engage in what I call “ice-breaking conversation” – which is nothing but bullshit.


How to Handle Meetings

There are ways of making a meeting productive – if you’re an executive, that’s your obligation. Meetings must be work sessions, not bull sessions.

1. Decide what kind of meeting it will be

Different meetings require different types of preparation and produce different results.

If there’s a meeting to write a marketing campaign, press release or something that needs to have a draft, a member or team has to prepare a draft beforehand. Otherwise, your meeting will be filled with brainstorms and conversation that won’t get the job done.

Objective meetings are supposed to ship the necessary/requested results at a glance. If you’re developing a new product, then you may arrange brainstorm/creative sessions, modularization, operations and scaling sessions.

If you’re dealing with a crisis, you may need results even faster. Delegating the right functions to the right teams will be a key to shipping such results.

Also, leaders can set meetings to happen in strategic parts of the day. Priorities should be handled early in the week – and that’s a nice excuse to arrange an 8 AM on Monday. Brainstorming or product development events may be handled after priorities are cleared.

Informal meetings, on the other hand, could be arranged

2. Reports

If one or all members report, the meeting should be confined to that matter.

Either there should be no discussion at all, or the discussion should be limited to making the points clearer. If all reports must be discussed, then they should be emailed or handed to each member beforehand. Also, each report should have a predefined time slot.

3. Product Development

Product development and brainstorming sessions could be disastrous if there are no rules to be respected. Here are some points that might help you organize creative sessions:

  • Define the beginning and end of the meeting. If you planned a 1-hour session, stick to that timeframe, especially if general thoughts are leading nowhere; extend the meeting only if the discussion is being extremely productive.
  • Document valuable (and only valuable) points. These are the ideas that should be discussed or developed in the next sessions or operations meetings.
  • Don’t ask for unnecessary stuff. Just don’t.

4. Use your weapons

Slack, Google Drive, Dropbox, Evernote and thousands of other apps are there to make your day more productive. Stick to one or two platforms and integrate them as much as necessary – one of the things I offer in my consulting hours.

Now it’s time for you to speak:

  1. How do you handle your meetings?
  2. Which strategies do you think are valuable?

Comment your answers below or email them to me @

Do you publish online content? I strongly recommend this article.





Quit Your Bullshit Content

One of the biggest issues for my consulting clients who already post online content is focus. The goal is to increase customer attention and lead to a further sale, but their original content ends up wasting their own time and guiding the customer nowhere.

It’s a mistake I’ve seen in all kinds of platforms – Facebook, Instagram, Email, YouTube…

If their lead clicked to see the content, he or she would just close the window or scroll past after three seconds. You can imagine what kind of metrics such boring and annoying content produced – even with a considerable number of visits, the qualitative metrics sucked.

Little or no customer retention, no further interest in other posts, no new subscriptions, unfollows, and no sales. That’s what I have to deal with while designing new strategies and funnels for clients.

Relieve yourself from pointless content

Design is where we should start when reviewing our content strategy. Relevance and design are fundamental traits a product must have to succeed – a brand may either supply an existing demand or create that demand.

In both cases, you need decent marketing:

  • Customer discovery
  • Market research
  • Goals and metrics to follow
  • Channels to act
  • Languages to speak*

* Each platform has its very own language to diverse audiences. Think of a LinkedIn user and a Tumblr user.

If you’re selling an “intellectual property” product such as a book, service, courses, or even your personal brand, providing valuable content on the right channels is a must.

If your product is a fashion outfit or a movie, for instance, you might want to communicate more visually on Instagram or YouTube. Your value ladder will be established if your product seems good and your campaigns are persuasive enough.

In both cases, offering valuable or creative content may not be enough – that’s because people are extremely bored. By being bored and mentally tired, they might need a better and continuing approach to drive an action. I’ll talk about it in the next lines.


1. Know your Target Audience

This is always where to start when thinking about content strategy. Identifying your target audience must be a seriously made investigation, and also should be documented. Here’s what to look for:

  • Audience stereotype(s)
  • Platforms they are located (Instagram, YouTube, LinkedIn…)
  • How many hours they’re online on a daily basis
  • What kind of content they consume
  • Which websites they access

Such a study should give us enough information to design our publication blueprints. Different products may have different audiences. A common mistake retailers make is advertising all their products with a single strategy, or differentiating only on simple patterns such as gender.

Advertising and selling apparel to a 35-year-old woman from Boston is different than advertising and selling to a teenager from California. Make sure to map your audience.

2. Set your objectives straight

Ideally, brands should design different posts for different campaigns. Imagine Hollister advertising their new summer collection – there’s a whole campaign behind each advertisement piece. This will avoid losing track of the right metrics and showing bullshit content to a specific targeted audience.

I started to realize that keeping a campaign running with organic traffic and creative, interesting posts, while improving everything by analyzing our customers and metrics, was not enough. And yes, that’s most of what it takes to make a campaign successful.

At the end of a busy day, while analyzing some average metrics from a campaign, I realized that every piece of advertising could have cognitive elements to improve customer response. In my dictionary, customer responses are sales and brand advocacy. Period.

By cognitive elements, I mean everything that’s in the body of the post. Images, text, language, colors, call to action, duration, tones, and sounds if we’re talking about a video.

So I designed three Instagram posts which had “handmade” engagement logic. I emailed our designer to increase some color tones and change some backgrounds – it was a candy store from Dallas, so we were talking about chocolate, candies, and some colorful stuff.

After 24 hours, I had all I needed in hand: copywriting was ready, three images and one video ready to fly. I posted once a day, for three days, and then I would get back to the original campaign funnel.

Online orders jumped up to 25% above the original campaign, and in-store sales increased by 34% the day after the third post.

I felt obliged to redesign the whole campaign around cognitive elements – it performed fantastically well, both in paid ads capturing new followers and in the organic traffic we had already built.


3. Act Cognitively

Well, we all do it, right?

Cognitive Marketing is not your Holy Grail. No marketing, no book, no consultant, and no formula will be your Holy Grail.

David Ogilvy once taught us that great marketing will help you sell your product – but just once. If people are disappointed and no improvements are made, they won’t buy products from you again. Also, if unhappy customers are left behind with no support, it will be even harder.

Cognitive elements will increase actions from your campaigns. It could be a subscription to your YouTube channel, following your Instagram or Facebook account, buying a product or recommending your brand to someone.


If you want to learn how Cognitive Marketing can help your brand, email me @


How to write a function in Swift

Functions are part of a core component in programming. Here’s what you should kn…

First time here? Get started in my Swift guide.

Functions are coded to perform actions and manipulate variables. They are a core component of programming. In Swift, functions are very simple to write and, if you have a basis in object-oriented programming, you can reuse them throughout your program.

Functions are used in all sorts of programs. You can write a function to calculate the number of calories you ate this morning (I ate a lot, btw), how many miles you should run to burn them, or even whether you can purchase a book using your credit card!

Defining a function

In Swift, as in other programming languages, a function is defined using the func keyword followed by its name, optional parameters, an execution path, and a return value. Continue reading “How to write a function in Swift”

Getting started in Swift

This post will serve as an intro and reference guide to all Swift posts and tutorials…


This post will serve as an intro and reference guide to all the Swift posts and tutorials I’ve been working on for my blog. Feel free to comment your thoughts and feedback here and on any of my posts you come across.

I will keep this section updated along a logical path for you to get the most out of this great language!

Ready to Swiftify?

Continue reading “Getting started in Swift”

What is a synthetic derivative?

A synthetic position, for all matters, could be created by buying or shorting the…

“The past is certain, the future obscure”

Thales of Miletus

First of all, we must define what a derivative is. It is simply a security whose price relies upon another asset – called an underlying asset. An S&P future and an American call option are derivatives.

A security is a negotiable financial instrument with monetary value – you may sell it for cash. It could be the smallest part of a company (a stock) or one bushel (60 pounds) of soybeans.

Continue reading “What is a synthetic derivative?”

Bad players will be thrown away

People are more likely to buy from people or brands they’ve known for a while in social media, which gave them steady relevant content…

The internet and social media have set new standards for the whole commercial process. People are more likely to buy from people or brands they’ve known for a while on social media – brands that have given them steady, relevant content.

Most entrepreneurs (or fakepreneurs) still don’t know what’s really going on in marketing nowadays. Those who claim to be “experts” in digital marketing don’t even know what real marketing is about, especially in the 21st century.

I explain this very well in this article:

Traditional Marketing is not dead

The truth is: bad players will be thrown away. What do I mean by that?

  • Bad communicators won’t sell their products
  • New products by unknown brands will struggle to reach their audience and sell
  • Billion-dollar brands will disappear because they’re ignoring this movement
  • Fakepreneurs will continue to rise, and then disappear
  • Millions of dollars will still be burned in wrong advertising, then comes bankruptcy
  • Relevance is always king

No doubt all of that came with the internet and the mobile world. All sorts of businesses should constantly review their marketing and communication strategies to satisfy their customers.

The customer is eager and knows what to buy

This perhaps hasn’t been and won’t be a universal truth, but it’s how you should approach your marketing.

Especially if you’re new to the market and won’t be burning millions into paid advertising, you should be offering high relevance content to your leads and people who follow you on social media. It’s the type of content that will make them stop scrolling their feed to read, watch and pay attention.

Continue reading “Bad players will be thrown away”