Categories
54cuatro-EN

Use Case: Animal Monitoring

Executive Summary

Achieving the UN Sustainable Development Goal of a “world with zero hunger” by 2030 will require being more productive, efficient, sustainable, inclusive, transparent and resilient. This demands an urgent transformation of today’s agriculture, livestock and food systems.

Industry 4.0 is transforming many industries with disruptive technologies such as #Blockchain, the Internet of Things (#IoT) and Artificial Intelligence (#AI). In the agricultural and food sector, the spread of mobile technologies, IoT and edge computing is already improving small producers’ access to innovations that improve their operations.

At #54cuatro we are convinced that the great challenge of companies like ours is to democratize access to these technologies that until recently were exclusive to large corporations.

There are many solutions worldwide for monitoring all kinds of assets, including, of course, animals. Animals of every type can be monitored: cows, bulls, sheep and horses, and even wild or aquatic species.

Just as industrial equipment communicates in #M2M (machine-to-machine) fashion, we took on the challenge of building a communications system we call #A2M (Animal to Machine).

Unlike off-the-shelf products, our methodology involves custom development tailored to each specific problem.

In this note we describe the approach used for the Buffalo Monitoring project in the province of Chaco, Argentina.


Problem to solve

The situation we found involved economic losses of about USD 300,000, caused by the inability to locate the animals and to detect heat, which meant the female buffaloes were not getting pregnant. The additional complexity was that the field covers 6,000 hectares.


Analysis

Buffalo herd

Monitoring on small, bounded farms is simple, but with so much terrain we had to change the focus of the project. The first thing we did was investigate the behavior of the water buffalo. Thanks to several organizations, such as the International Buffalo Federation, we identified the following patterns:

• The water buffalo has about 150 sweat glands per cm², compared with roughly 1,500 in cattle, so it needs water to cool itself. This helps us predict frequently occupied areas based on temperature.

• It lives on average 25 years, against 10 for the cow, and can produce 16 calves against the cow’s 6. This makes locating the females, so as not to miss heat cycles, especially important.

• When the female buffalo is in heat, she allows herself to be chased by the male or lets him rest on her rump. We can detect signs of heat from the pattern of her movements.

• A sick or dying animal moves away from the group. This matters for determining the cause of death and for recovering the sensors.
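The movement-based heat detection mentioned above can be sketched as follows. This is a minimal illustration, not the project’s calibrated algorithm: the 1.5× baseline threshold and the per-day aggregation are assumptions made for the example.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two GPS fixes."""
    R = 6371000  # mean Earth radius in metres
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * R * asin(sqrt(a))

def daily_distance(fixes):
    """Total distance walked over an ordered list of (lat, lon) fixes."""
    return sum(haversine_m(*a, *b) for a, b in zip(fixes, fixes[1:]))

def flag_possible_heat(today_m, baseline_m, factor=1.5):
    """Flag a female whose daily movement is well above her own baseline.
    The 1.5x factor is an illustrative assumption, not a calibrated value."""
    return today_m > factor * baseline_m
```

A restless animal being chased by the male walks noticeably more than her own recent average, which is the signal this sketch keys on.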


Solution Design

With these patterns we began to design three things: first, the network coverage needed to detect each animal’s position; second, the sensor itself, since the animal’s behavior ruled out a standard sensor (it would not survive prolonged submersion, and the animal would rub it against trees to remove it); and third, the reporting patterns that let us detect location, possible heat, disease, and so on.

We designed the sensor around this behavior, testing the designs on 3D-printed models.

For connectivity we installed:

• Three communication masts, 36 meters high, guyed at three points, with anti-twist star and beacon. Civil works included guy anchors and a protective perimeter fence with gates; on these masts we set up three LoRa gateways.

• Ubiquiti IP radio links for transport and MikroTik PoE routers in 10U outdoor cabinets, running autonomously on solar power (panels and 100 Ah batteries).

Part of the 3D-printed model

Each animal became a transmission node. We use the geolocation platform developed by Odea to determine positioning and cross the GPS data with the ear-tag data, which contains:

  • UID Stick
  • Positioning ID
  • Name
  • Sex
  • Birth date
  • Position
  • Status
  • Vaccination
  • Weight
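Crossing the GPS fixes with the ear-tag records can be sketched like this. The field names and sample values are hypothetical, mirroring the list above; the real Odea platform schema is not shown here.

```python
import datetime

# Hypothetical ear-tag record, mirroring the fields listed above.
ear_tags = {
    "A001": {
        "uid_stick": "A001",
        "positioning_id": 17,
        "name": "Luna",
        "sex": "F",
        "birth_date": datetime.date(2016, 3, 4),
        "status": "active",
        "vaccination": ["aftosa"],
        "weight_kg": 540,
    }
}

def enrich_fix(fix, tags):
    """Cross a raw GPS fix with the animal's ear-tag record."""
    record = tags.get(fix["uid_stick"])
    if record is None:
        return None  # unknown tag: surface for manual review
    return {**record, "lat": fix["lat"], "lon": fix["lon"], "ts": fix["ts"]}
```

Each incoming fix thus carries the full identity and health context of the animal, which is what makes the later pattern detection possible.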

Additionally, we incorporated other data points of interest, such as climate factors and health schedules.

Mobile app tracking each animal

Finally, to reduce the time needed to detect an animal’s state, we adopted a drone with a programmed flight plan, a multispectral camera and a wide visual range, thanks to the people at Runco, who helped us find the best equipment for our needs.

Drone view in RGB and NDVI
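The NDVI view shown above is computed per pixel from the drone’s multispectral bands as (NIR - Red) / (NIR + Red). A minimal sketch:

```python
def ndvi(nir, red):
    """Normalised Difference Vegetation Index for one pixel.
    Values near +1 indicate dense vegetation; near 0, bare soil;
    negative values typically indicate water."""
    if nir + red == 0:
        return 0.0  # avoid division by zero on dark pixels
    return (nir - red) / (nir + red)
```

Healthy pasture reflects strongly in the near-infrared band, so NDVI maps help relate the animals’ locations to available forage and water.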

Results

With our solution in place, the platform will provide insights from each node, feeding our data catalogs and allowing us to tune the pattern-detection algorithms. The detected patterns should enable some key results:

  • Find each animal
  • Detect heat signals
  • Understand the behavior according to temperature, humidity, rainfall, etc.
  • Optimize weight control and feeding for each animal based on these checks
  • Reduce risks and mortality



    Use Case: Business Intelligence for SMB

    Executive Summary

Our BI solution for small and medium businesses gives you a view of all the company’s functional areas, from accounting to warehousing and production: data and reports that can easily be shared within the company, adapted to the needs of each department.

The tool is designed so that data is processed in real time, with the necessary agility.

    Benefits

1- Access to information provides a rapid return on investment (ROI).
2- It allows senior management to obtain a business summary at just the right time, from any device and at any time. This grants speed and flexibility of action.
3- It helps each area gain autonomy, providing key metrics for adjusting all the internal gears toward better business profitability.
4- It standardizes reports, so the different areas can be evaluated with the same criteria.
5- It eliminates repeated tasks that add no VALUE and unifies the data, so each area can analyze its own numbers and have them match those of other areas.

NPS (Net Promoter Score)

On a scale of 0 to 10, to what extent would you be willing to recommend the company to your relatives or acquaintances?

Level 1:
• Ask about the predisposition to recommend
• Calculate the NPS

Level 2:
• Segment the client portfolio
• Inquire about the reasons for the recommendation
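Under the standard NPS cut-offs (promoters score 9-10, detractors 0-6, passives 7-8), the “Calculate the NPS” step reduces to a few lines:

```python
def nps(scores):
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6).
    Passives (7-8) count in the total but belong to neither group."""
    if not scores:
        raise ValueError("no responses")
    promoters = sum(s >= 9 for s in scores)
    detractors = sum(s <= 6 for s in scores)
    return 100 * (promoters - detractors) / len(scores)
```

The result ranges from -100 (all detractors) to +100 (all promoters), which is why segmenting the portfolio before comparing scores matters.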

    NPS Indicators:

    Descriptive Indicators
These capture what happens when a customer interacts with any point of contact in the company. Phone waiting time and employee friendliness fall into this category: activities and processes that customers value and that shape their perceptions.

    Indicators of Perception
These express what customers think and feel about what happened in their interaction with the company. This is what satisfaction or effort scales measure.

    Outcome Indicators
These capture what customers are likely to do as a result of their interactions. Based on their interactions and how they evaluate them, customers decide whether to continue, buy more, or recommend the company to friends and family. Loyalty indicators, of which the #NPS is one, thus constitute the “results” that companies seek. The customer who consumes more, stays with the company longer and brings in new customers is its engine of growth.

These indicators relate what a company does (processes) with what customers think about those actions (perceptions) and with the business outcome of what customers do (recommendation). And as we already know, high recommendation and loyalty generate economic growth (#ROI).

    They trust our Business Intelligence services



      What DataOps solves

We have already written entries on #DataOps (data operations), but to refresh the memory: it is the combination of people, processes and technology that lets us handle data in ways useful to #developers, #datascientists, #operations, applications and tools (e.g. #artificial #intelligence), channeling the data, keeping it safe throughout its life cycle and configuring #governance over it.

The faster we manipulate and deliver data, the faster the business will #grow from using that information. The objective, therefore, is to promote data management practices and procedures that improve the speed and accuracy of analysis.

The idea of this post is to make a short list of 5 basic problems that implementing DataOps solves in an organization.

      Let’s see what DataOps solves:

#Bug fixes: In addition to improving the agility of development processes, DataOps speeds up the response to errors and defects, significantly reducing resolution times.

      #Efficiency: in DataOps, data specialists and developers work together and, therefore, the flow of information is horizontal. Instead of comparing information in weekly or monthly meetings, the exchange occurs regularly, which significantly improves communication efficiency and the final results.

#Objectives: DataOps gives developers and specialists real-time data on the performance of their systems.

#DataSilos: DataOps tackles the data silos that arise across a company’s departments and management layers. Many groups treat their operations as inviolable fiefdoms, and each silo is a barrier to implementing better data management strategies. Proper governance is crucial to obtaining all the data sources the organization needs to meet its business objectives.

#Skills: It is a fact that data professionals are scarce. The lack of the right people to manage #BigData and #BI projects means projects are not executed on time or, worse, fail. It is a mistake to throw more data at a team that lacks the knowledge and resources to handle it.

We invite you to join our LinkedIn group, “DataOps en Español”.



      Some loose ideas about containers

It is nothing new that containers are already an industry standard: they enable agile development, improve time to market, improve #analytics and generate a quickly verifiable ROI.

We are at a #HYPE moment of the #Microservices era. We see a lot of adoption of this architecture, but there is a long way to go, and many have not yet made progress with it; meanwhile, we are already talking about #ServiceMesh, a new component that facilitates communication.

      But what is Service Mesh?

Service Mesh is a layer that improves how applications built on microservices communicate with each other. In #Monolithic or #SOA developments, calls were made within each application or between layers; in the new scheme, they are replaced by calls made via #API communications.

This has significant advantages, since it lets developers focus on business logic rather than the communications layer. But API communication lacks standardization, as there is no single defined protocol for creating APIs.

      This is when Service Mesh becomes important. Why?

Because it is a mesh of services that sits above the microservices: a low-latency communications layer that gives us discovery of new services, and with it the ability to create load-balancing, authentication and encryption rules, among other things, while also providing the monitoring needed to ensure the availability of our APIs.
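As a rough illustration of what a mesh takes out of application code, here is a toy service-discovery and load-balancing sketch. In a real mesh such as Istio or Envoy this logic lives in a sidecar proxy, not in the application; the class and method names here are invented for the example.

```python
import random

class ServiceRegistry:
    """Toy service discovery: maps service names to live endpoints.
    A real mesh keeps this in its control plane and sidecars."""
    def __init__(self):
        self._endpoints = {}

    def register(self, service, endpoint):
        """Announce a new instance of a service (discovery)."""
        self._endpoints.setdefault(service, []).append(endpoint)

    def resolve(self, service):
        """Pick a random live endpoint: a trivial load-balancing rule."""
        endpoints = self._endpoints.get(service)
        if not endpoints:
            raise LookupError(f"no endpoints for {service}")
        return random.choice(endpoints)
```

The point of the mesh is precisely that application code never has to contain classes like this: discovery, balancing, retries and encryption are handled uniformly outside it.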

There are many service meshes on the market, such as #Istio or #Envoy, and from #OpenShift version 4 there is OSMO (OpenShift Service Mesh Operator), which enables better tracking, routing and optimization of application communication.

OpenShift and Service Mesh

If you need to modernize your architecture, write or call me, so we can determine your company’s maturity level for adopting microservices and its level of Agile/DevOps practice; with an assessment we can accompany you to the next level.

Posted by our Sales Director, Rodrigo Yañez, at https://www.linkedin.com/pulse/algunas-ideas-sueltas-sobre-containers-microservicios-rodrigo-ya%C3%B1ez/



      It is the time of DataOps. Know the details

DataOps is a methodology that emerged from Agile culture and seeks to cultivate data management practices and processes that improve the speed and accuracy of analysis, including access, quality, automation, integration and data models.

      DataOps is about aligning the way you manage your data with the goals you have for that data.

It is worth recalling part of the DataOps Manifesto:

1. People and interactions instead of processes and tools
2. Efficient analytics solutions instead of comprehensive documentation
3. Collaboration with the consumer instead of contractual negotiations
4. Experimentation, iteration and feedback instead of extensive upfront design
5. Multidisciplinary ownership of operations instead of isolated responsibilities

Let’s take a clear example of #DataOps applied to reducing the customer #churn rate. You can leverage your customers’ data to build a recommendation engine that shows products relevant to them, keeping them buying for longer. But that is only possible if your data science team has access to the data needed to build that system and the tools to implement it, and can integrate it with your website, continuously feed it new data, monitor performance, and so on. For that you need a continuous process that includes input from your engineering, IT and business teams.

To implement solutions that add value, healthy data must be managed. Better data management leads to better, more available data; more and better data lead to better analysis, which translates into better knowledge, better business strategies and greater profitability.

      DataOps strives to foster collaboration between data scientists, engineers and IT experts so that each team works synchronized to leverage data in the most appropriate way and in less time.

DataOps is one of the many methodologies born from #DevOps. The success of DevOps lies in eliminating the silos of traditional IT: one team doing development work and another doing operations. In a DevOps setup, software deployment is fast and continuous because all the teams are linked to detect and correct problems as they occur.

DataOps builds on this idea, applying it across the entire data life cycle. Consequently, DevOps concepts such as CI/CD are now being applied to the data science production process. Data science teams use software version control such as GitHub to track code changes, and container technology such as Kubernetes and OpenShift to create environments for analysis and model deployment. This combination of data science and DevOps is sometimes called “continuous analytics.”

So much for the theory. But… how do I start implementing DataOps?

      This is where you should start:

• #Democratize your data. Remove the bureaucratic barriers that block access to the organization’s data; any company that strives to be at the forefront needs its data sets to be available.
• Take advantage of #opensource platforms and tools: platforms for data movement, orchestration, integration, performance and more.
• Part of being agile is not wasting time building things you don’t have to, or reinventing the wheel when the tools your team already knows are open source. Consider your data needs and select your technology stack accordingly.
• Automate, automate, automate. This comes straight from the DevOps world: it is essential to #automate the steps that otherwise demand heavy manual effort, such as quality control tests and monitoring of data analysis pipelines.
• Enable self-sufficiency with #microservices. For example, giving your data scientists the ability to deploy models as #APIs means engineers can integrate that code wherever needed without #refactoring, resulting in productivity gains.
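As a minimal illustration of the “automate” point above, an automated quality gate that fails a pipeline when required fields are too sparse might look like this. The field names and the 5% null threshold are illustrative assumptions, not prescriptions.

```python
def check_quality(rows, required, max_null_ratio=0.05):
    """Pipeline gate: report required fields that are missing or too sparse.
    Returns a list of problem descriptions; an empty list means the gate passes.
    The 5% null threshold is an illustrative assumption."""
    problems = []
    total = max(len(rows), 1)  # guard against an empty batch
    for field in required:
        nulls = sum(1 for r in rows if r.get(field) in (None, ""))
        if nulls / total > max_null_ratio:
            problems.append(f"{field}: {nulls}/{len(rows)} null")
    return problems
```

Run as a step in CI or before each pipeline stage, a check like this turns data quality from a monthly meeting topic into an automatic, immediate signal.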
If you want to know more, we recommend joining our LinkedIn group, DataOps en Español.

      NoSQL Governance

NoSQL databases have grown significantly in recent years, and almost all companies now run #NoSQL installations as part of their business data programs. #Gartner estimates that 90% of data is ‘unstructured’. In an increasingly #Agile/#DevOps/#DataOps world, using NoSQL databases for application development is considered a great advantage for accelerating software time to market.

Developers can create the schema and design the database in their own code without involving traditional #DBA teams. But the lack of formal design and adequate processes can cause problems for the application and can affect the overall governance of company data. For example, it becomes difficult to determine what is stored where, and it poses a great challenge for audit and compliance reporting.

Since more than one type of NoSQL database is often used alongside an RDBMS, a more robust data governance framework is required to understand the data stored across such a variety of database technologies.

Finally, DBAs and other data professionals may now have to review application code to understand the schema and determine whether a problem lies in the data, the schema or the infrastructure, which makes troubleshooting more complex.

It is very important that ‘data-driven’ organizations adopt a new mindset: taking advantage of the latest NoSQL database technologies while also maintaining the integrity, quality and governance of the underlying data.
