Robotics

Robotics is an interdisciplinary research area at the interface of computer science and engineering. It involves the design, construction, operation, and use of robots, with the goal of building intelligent machines that can assist humans in their day-to-day lives and keep them safe.


Robotics Projects

Robot Team Coordination for Target Tracking using Fuzzy Logic Control

Robot team coordination is an important aspect of many applications, including target tracking, where multiple robots work together to locate and follow a moving target. Fuzzy logic control is a popular method for coordinating robot teams, as it allows for flexibility and adaptability in uncertain environments. With fuzzy logic control, robots can make decisions based on imprecise or incomplete information, such as when tracking a target that is moving unpredictably. By using fuzzy logic control, robot teams can work together more effectively and efficiently, ultimately improving the accuracy and success of target tracking tasks. This technology has the potential to revolutionize industries such as search and rescue, surveillance, and agriculture, where coordinated robot teams can make a significant difference in achieving the desired outcomes.
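The fuzzy decision-making described above can be sketched in a few lines. The membership functions, distance ranges, and output speeds below are hypothetical choices for illustration, not taken from any particular project:

```python
# Minimal fuzzy-logic speed rule for a tracking robot (illustrative sketch).
# All ranges and speed levels here are invented example values.

def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_speed(distance_m):
    """Map distance to the target (meters) to a speed command (m/s)."""
    # Fuzzify: how 'near', 'medium', or 'far' is the target?
    near = tri(distance_m, -1, 0, 4)
    medium = tri(distance_m, 2, 5, 8)
    far = tri(distance_m, 6, 10, 14)
    # Rules: near -> slow (0.2), medium -> cruise (0.6), far -> fast (1.0).
    # Defuzzify with a weighted average of singleton outputs.
    num = near * 0.2 + medium * 0.6 + far * 1.0
    den = near + medium + far
    return num / den if den else 0.0
```

Because the membership sets overlap, the commanded speed changes smoothly as the target distance varies, which is the flexibility under uncertainty that the fuzzy approach provides.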


Autonomous Target Tracking Robot

Autonomous target tracking robots are becoming an increasingly popular solution for a wide range of applications, including surveillance, security, and search and rescue missions. These robots are equipped with advanced sensors and algorithms that allow them to detect and track targets autonomously, without the need for human intervention. Using a combination of cameras, lidar, and other sensors, these robots can accurately track targets in real time, even in challenging environments. Additionally, many autonomous target tracking robots are equipped with obstacle avoidance capabilities, allowing them to navigate around obstacles and continue tracking their targets without disruption. Overall, autonomous target tracking robots are a powerful and versatile solution for a range of industries and use cases, offering increased efficiency, accuracy, and safety in a variety of challenging scenarios.

There are several industries that are currently using autonomous target tracking robots. One of the most common industries is the security and surveillance industry, where these robots are used to monitor and track suspicious activities, identify potential threats, and provide real-time situational awareness to human operators. Another industry that is using autonomous target tracking robots is the agriculture industry, where these robots are used to monitor crops, track livestock, and perform other tasks related to precision agriculture. In the search and rescue industry, autonomous target tracking robots are used to locate and track missing persons, while in the military, they are used for reconnaissance and surveillance missions. Finally, autonomous target tracking robots are also being used in the entertainment industry, where they are used to capture dynamic footage for movies, TV shows, and other media productions.

Autonomous target tracking robots offer several advantages over traditional methods of target tracking. One of the main advantages is their ability to operate autonomously, without the need for human intervention. This allows them to work continuously for extended periods of time, without the need for breaks or rest. Additionally, autonomous target tracking robots are equipped with advanced sensors and algorithms that allow them to detect and track targets with greater accuracy and precision than traditional methods. They can also operate in challenging environments, such as low light conditions or harsh weather, where traditional methods may struggle. Another advantage of autonomous target tracking robots is their ability to provide real-time situational awareness to human operators, allowing them to make faster and more informed decisions. Overall, autonomous target tracking robots offer a more efficient, accurate, and flexible solution for target tracking, making them an increasingly popular option for a wide range of industries and use cases.
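One lightweight technique of the kind such robots use to smooth noisy detections and keep a lock on a moving target is the alpha-beta tracker. The gains, sample time, and one-dimensional setup below are illustrative assumptions, not a specific robot's implementation:

```python
# Sketch of an alpha-beta tracker: smooth noisy target positions and
# predict motion between sensor frames. Gains are example values.

def alpha_beta_track(measurements, dt=0.1, alpha=0.85, beta=0.005):
    """Return smoothed position estimates for a 1-D target track."""
    x, v = measurements[0], 0.0      # initial position estimate, zero velocity
    estimates = []
    for z in measurements:
        x_pred = x + v * dt          # predict the next position
        r = z - x_pred               # innovation: measurement residual
        x = x_pred + alpha * r       # correct position toward the measurement
        v = v + (beta / dt) * r      # correct velocity estimate
        estimates.append(x)
    return estimates
```

The predict step lets the robot keep estimating the target's position even between sensor updates, which is what makes continuous tracking through brief occlusions possible.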


Motor Identification using MATLAB

Motor identification is an important process in the design and control of electric motors. It involves determining the electrical and mechanical parameters of the motor, such as resistance, inductance, and torque constant, which are crucial for accurate control and performance optimization. MATLAB is a popular tool for motor identification projects, as it offers a range of functionalities for modeling and simulation. With MATLAB, users can apply advanced data analysis and modeling techniques, such as system identification and parameter estimation, to accurately identify the motor’s parameters. This information can then be used to design and optimize motor control systems, improving efficiency and performance and reducing energy consumption. Overall, motor identification using MATLAB is a critical process in the design and control of electric motors, and can have a significant impact on the performance and efficiency of motor-driven systems.
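The parameter-estimation step can be sketched outside MATLAB as well (MATLAB's System Identification Toolbox automates fits of this kind). The first-order motor model, sample time, and "true" parameters below are invented for illustration:

```python
import math

# Sketch of step-response motor identification by linear least squares.
# A DC motor's speed response is approximated by the first-order model
# omega[k+1] = a*omega[k] + b*u[k]; from (a, b) we recover gain and time
# constant. All numbers below are made-up example values.

def identify_first_order(omega, u, dt):
    """Fit omega[k+1] = a*omega[k] + b*u[k]; return gain K and time constant tau."""
    x, y, uu = omega[:-1], omega[1:], u[:-1]
    # Normal equations for the two-parameter least-squares problem.
    s11 = sum(xi * xi for xi in x)
    s12 = sum(xi * ui for xi, ui in zip(x, uu))
    s22 = sum(ui * ui for ui in uu)
    t1 = sum(xi * yi for xi, yi in zip(x, y))
    t2 = sum(ui * yi for ui, yi in zip(uu, y))
    det = s11 * s22 - s12 * s12
    a = (t1 * s22 - t2 * s12) / det
    b = (s11 * t2 - s12 * t1) / det
    tau = -dt / math.log(a)   # continuous-time time constant
    K = b / (1.0 - a)         # steady-state gain
    return K, tau

# Synthetic step response of a motor with K = 2.0 (rad/s per volt), tau = 0.5 s.
dt, K_true, tau_true = 0.01, 2.0, 0.5
a_true = math.exp(-dt / tau_true)
b_true = K_true * (1 - a_true)
u = [1.0] * 300
omega = [0.0]
for k in range(299):
    omega.append(a_true * omega[-1] + b_true * u[k])

K_est, tau_est = identify_first_order(omega, u, dt)
```

With noiseless synthetic data the fit recovers the true parameters almost exactly; with real measurements, noise in the data is exactly why the accuracy challenges discussed below matter.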

There are several common challenges that can arise in motor identification projects. One of the main challenges is the accuracy and reliability of the data collected during the identification process. Even small errors or inaccuracies in the data can lead to significant discrepancies in the identified motor parameters, which can negatively impact the performance of the motor control system. Another challenge is the complexity of the motor model and the identification process itself. Electric motors can have complex dynamics, and accurately modeling and identifying their parameters can require advanced mathematical and analytical techniques. Additionally, the identification process can be time-consuming and computationally intensive, requiring significant processing power and resources. Finally, the availability and quality of measurement equipment can also be a challenge, as high-quality sensors and measurement devices are required to accurately measure and collect the necessary data. Addressing these challenges requires careful planning, attention to detail, and expertise in motor identification techniques and MATLAB.


Controlling an Electric Wheelchair based on Eye-Tracking

Controlling an electric wheelchair based on eye-tracking is a project that aims to improve the mobility and independence of individuals with disabilities. The project involves using eye-tracking technology to allow users to control the movement of an electric wheelchair by simply looking at different areas of a screen. The eye-tracking system is connected to a computer, which processes the user’s eye movements and translates them into commands for the electric wheelchair. The system is designed to be customizable and adaptable to the needs of individual users, allowing them to set the sensitivity and speed of the eye-tracking system according to their preferences. The project has the potential to significantly improve the quality of life for individuals with disabilities, providing them with greater autonomy and freedom of movement. Additionally, the project highlights the potential of eye-tracking technology for a range of applications beyond wheelchair control, including assistive technology for communication, gaming, and virtual reality.

The eye-tracking system used to control an electric wheelchair works by using a camera or sensor to detect the user’s eye movements and translate them into commands for the wheelchair. The camera or sensor is mounted near the user’s eyes, and tracks the position and movement of the pupils, as well as other eye movements such as blinks and saccades. The eye-tracking software analyzes the data from the camera or sensor and determines where the user is looking on the screen. The software then translates this information into commands for the electric wheelchair, such as forward, backward, left and right movements. The system can be calibrated to the user’s individual eye movements, allowing for greater accuracy and ease of use. The eye-tracking system can also be combined with other assistive technologies, such as voice recognition or switch control, to provide users with additional control options. Overall, the eye-tracking system represents a powerful and innovative solution for individuals with disabilities, allowing them to control their environment and improve their quality of life.
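A minimal sketch of the gaze-to-command mapping described above, assuming a screen divided into a neutral center and four directional zones (the layout, resolution, and dead-zone size are hypothetical choices, not a specific product's design):

```python
# Sketch: translate a gaze point on screen into a wheelchair drive command.
# Screen size and dead-zone fraction are illustrative parameters.

def gaze_to_command(x, y, width=1920, height=1080, dead_zone=0.2):
    """Return a drive command from a gaze position in screen pixels."""
    # Normalize to [-1, 1] with (0, 0) at the screen center.
    nx = 2.0 * x / width - 1.0
    ny = 2.0 * y / height - 1.0
    if abs(nx) < dead_zone and abs(ny) < dead_zone:
        return "stop"                                 # gazing at center: no motion
    if abs(ny) >= abs(nx):                            # vertical gaze dominates
        return "forward" if ny < 0 else "backward"    # screen y grows downward
    return "right" if nx > 0 else "left"
```

Enlarging `dead_zone` makes the system less sensitive, which corresponds to the per-user sensitivity calibration mentioned above.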


Industrial (@Work) Robot Planning and Navigation

The Industrial Robot Planning and Navigation Project is an innovative initiative focused on improving the efficiency and safety of industrial robot systems. The project aims to develop advanced planning and navigation algorithms that enable industrial robots to move and operate in complex environments, such as manufacturing plants and warehouses. These algorithms are designed to optimize the motion planning of industrial robots, ensuring that they can move smoothly and safely around obstacles, and avoid collisions with other objects or machines. The project involves the use of advanced sensors, such as lidar and cameras, to gather real-time data about the robot’s surroundings, which is then processed by sophisticated algorithms to generate optimal paths and trajectories for the robot. The Industrial Robot Planning and Navigation Project has the potential to significantly improve the performance and efficiency of industrial robot systems, reducing costs and improving safety in industrial settings.
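Grid-based A* search is one common way to generate the collision-free paths described above: the sensor data is discretized into an occupancy grid, and the planner searches it for a shortest obstacle-free route. This is a generic sketch of the technique, not the project's actual algorithm:

```python
import heapq

# Sketch of grid-based A* path planning on a 0/1 occupancy grid (1 = obstacle).
# The caller supplies the grid, e.g. built from lidar data; here any map works.

def astar(grid, start, goal):
    """Return a shortest 4-connected path from start to goal, or None."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    open_set = [(h(start), 0, start, [start])]               # (f, g, node, path)
    seen = set()
    while open_set:
        _, g, node, path = heapq.heappop(open_set)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                nxt = (nr, nc)
                heapq.heappush(open_set, (g + 1 + h(nxt), g + 1, nxt, path + [nxt]))
    return None  # goal unreachable
```

Because the Manhattan heuristic never overestimates the remaining distance, the first path popped at the goal is guaranteed to be a shortest one.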

The Industrial Robot Planning and Navigation Project has several potential benefits for industrial settings. One of the main benefits is improved efficiency and productivity. By optimizing the motion planning of industrial robots, the project can reduce the time required for tasks such as material handling, assembly, and inspection, leading to faster production times and increased output. Additionally, the project can help to reduce errors and defects in the manufacturing process, improving product quality and reducing waste. Another benefit of the project is improved safety. By enabling industrial robots to navigate complex environments more effectively, the project can reduce the risk of accidents and injuries in industrial settings, improving the safety of workers and reducing liability for companies. Finally, the project has the potential to reduce costs associated with industrial robot systems, such as maintenance and repair costs, by improving the reliability and durability of the robots. Overall, the Industrial Robot Planning and Navigation Project represents a powerful and innovative solution for improving the efficiency, safety, and profitability of industrial settings, and has the potential to revolutionize the field of industrial robotics.


Detection of Eye Direction to Control the Robot using LabVIEW

The Detection of Eye Direction to Control the Robot using LabVIEW Project is an innovative initiative focused on developing a system that allows users to control a robot simply by moving their eyes. The project involves using an eye-tracking system to detect the direction of the user’s gaze, which is then translated into commands for the robot using LabVIEW software. The system is designed to be customizable and adaptable to the needs of individual users, allowing them to set the sensitivity and speed of the eye-tracking system according to their preferences. The project has the potential to significantly improve the accessibility and usability of robotic systems, providing users with greater autonomy and control. Additionally, the project highlights the potential of eye-tracking technology for a range of applications beyond robot control, including assistive technology for communication, gaming, and virtual reality.

LabVIEW software is used to translate eye movements detected by the eye-tracking system into robot commands. The eye-tracking system detects the direction of the user’s gaze and sends this information to the LabVIEW software, which processes the data and generates commands for the robot. The LabVIEW software can be programmed to recognize specific eye movements or gaze patterns, which can be mapped to different robot commands such as movement, rotation, and grasping. The software can also be customized to adjust the sensitivity and speed of the eye-tracking system, as well as the range of commands available to the user. Overall, the LabVIEW software serves as the interface between the eye-tracking system and the robot, allowing users to control the robot simply by moving their eyes in a specific direction or pattern, and has the potential to revolutionize the field of assistive technology.

The LabVIEW software can adjust the sensitivity and speed of the eye-tracking system by changing the parameters of the eye-tracking algorithm. The algorithm is responsible for detecting the position and movement of the user’s eyes, and the LabVIEW software can modify its parameters to tune the system. For example, the software can adjust the threshold for eye movement detection, making the system more or less sensitive to small eye movements. It can also adjust the refresh rate of the eye-tracking system, controlling how often eye movements are sampled and processed. Additionally, the LabVIEW software can adjust the mapping between eye movements and robot commands, allowing users to customize the sensitivity and speed of the system to their individual needs and preferences. Overall, the LabVIEW software provides a flexible and customizable solution for adjusting the sensitivity and speed of the eye-tracking system, ensuring that the system is optimized for the needs of individual users.
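The sensitivity threshold described above can be sketched as a simple classifier over gaze displacement. The threshold value, units, and command names are illustrative; in a real LabVIEW implementation these would typically be exposed as front-panel controls:

```python
# Sketch: a gaze-displacement threshold decides whether a small eye movement
# counts as a command at all. All parameter values are example assumptions.

def detect_gaze_command(dx, dy, threshold=0.15):
    """Classify a gaze displacement (normalized screen units) into a command."""
    if max(abs(dx), abs(dy)) < threshold:
        return None                                        # below threshold: ignore
    if abs(dx) >= abs(dy):                                 # horizontal dominates
        return "rotate_right" if dx > 0 else "rotate_left"
    return "move_forward" if dy > 0 else "move_backward"
```

Lowering `threshold` makes the system respond to smaller eye movements, which is exactly the sensitivity adjustment the paragraph describes.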


Storing and Removing Goods with the Warehouse Robot

At first, goods with different colors and expiration dates are placed in front of the robot, and the robot must examine each item and store it in memory. The robot must prepare a list of the goods’ attributes (shape, color, expiration date), sort the list by expiration date, and show it on the display in the operator’s room. The robot must then enter the warehouse room and inspect the goods there. Some goods in the warehouse may be identical to one another, and some match the goods listed in the first step, except that no QR code is attached to the goods inside the room, so their expiration dates are not directly visible. The robot must therefore deduce each item’s expiration date from its other attributes (color and shape) by consulting the list prepared in the first step. It must then select the item from that list whose expiration date is nearest, identify all instances of that item type in the room, and be ready to remove them from storage. Finally, after leaving the room completely and stopping behind the designated frame, the robot must display the result on the operator’s room display.
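The bookkeeping this scenario requires can be sketched as follows: build a list of scanned goods, sort it by expiration date, and infer the expiration of an unlabeled item from a labeled item with the same shape and color. The data values are invented examples:

```python
from datetime import date

# Sketch of the warehouse robot's inventory logic. Item attributes and
# dates below are made-up example data.

def build_inventory(scanned):
    """Sort (shape, color, expiration) records by soonest expiration first."""
    return sorted(scanned, key=lambda item: item[2])

def infer_expiration(inventory, shape, color):
    """Look up an unlabeled item's expiration via a matching labeled item."""
    for s, c, exp in inventory:
        if (s, c) == (shape, color):
            return exp
    return None  # no matching labeled item was scanned

scanned = [
    ("cube", "red", date(2024, 6, 1)),
    ("cylinder", "blue", date(2024, 3, 15)),
    ("cube", "green", date(2024, 9, 30)),
]
inventory = build_inventory(scanned)
```

The first entry of the sorted list is the item closest to expiration, which is the one the scenario says the robot should select for removal.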


Warehouse Robot Categorization of New Products

Three chambers are spaced apart, and the goods in each chamber have key attributes: color, shape, and storage temperature. The storage temperature is readable by the robot from a QR code affixed to each product. In the first step, the robot must enter each room, examine the goods inside, and prepare a list of them; on leaving the room, it displays the prepared list on the operator display. It then returns to the start area and displays the products of all three chambers, grouped by chamber, on the operator screen. Beside the start area is a new product. The robot must examine this product to decide which chamber it belongs to, move to that chamber, enter and stop inside it, and display the details of the product in front of it, together with the other goods in the room and a message marking the end of the operation, on the operator’s screen.
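The classification step can be sketched by matching the new product's attributes against what was recorded in each chamber. The matching criterion (shared color and shape), chamber names, and data below are assumptions for illustration; the scenario itself does not fix the exact rule:

```python
# Sketch: assign a new product to the chamber containing goods with the
# same color and shape. Chamber contents here are invented example data.

def classify_product(chambers, product):
    """Return the name of the chamber whose goods match the product, or None."""
    for name, goods in chambers.items():
        for g in goods:
            if g["color"] == product["color"] and g["shape"] == product["shape"]:
                return name
    return None  # no chamber contains a matching item

chambers = {
    "chamber_1": [{"color": "red", "shape": "cube", "temp_c": 4}],
    "chamber_2": [{"color": "blue", "shape": "cylinder", "temp_c": -18}],
    "chamber_3": [{"color": "green", "shape": "sphere", "temp_c": 20}],
}
product = {"color": "blue", "shape": "cylinder"}
```

Once the chamber is identified, the robot navigates there and reports the product's details alongside the chamber's other goods, as the scenario describes.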