SOFTWARE DEVELOPMENT PROPOSAL

PREPARED FOR
Steve Manz
TS Cyanergy

PREPARED BY
Usama Noor
Meshlogix Solutions (Pvt.) Ltd.
JAN 25, 2024

Steve Manz
TS Cyanergy
Sugar Land, TX

Dear Steve Manz,

Re: Enclosed Software Development Proposal

Please find enclosed our detailed software proposal for your kind consideration. At Meshlogix Solutions, we are aware that creating client-oriented software takes a mixture of technical excellence and clear communication, and our firm hires only the very best to ensure you receive both. We know that every client is unique, so we strive to deliver an individual, innovative, and affordable proposal every time, and to follow it through with an outstanding delivery that is both on time and within budget.

We have over 10 years of development experience across the globe. We specialize in Microsoft technologies, Angular, and React, be it for small, medium, or large-scale web applications. Anything related to MS technologies, Python, Node, Angular, and React is covered under the umbrella of Meshlogix Solutions.

Finally, we realize that you are very busy, and we thank you in advance for the time spent reviewing our proposal.

Yours truly,
Usama Noor
1. Project Overview

As discussed with Mohammed Khambaty and Steve Manz, you are looking to implement a cutting-edge Gas Emissions Monitoring System for engines on drilling rigs and ships. This system will enable real-time tracking of emissions, ensuring environmental compliance and operational efficiency. In the current phase we will address assets, engines, reporting, analytics, and billing; in addition, the settings page will empower users to configure and personalize their notification preferences, providing flexibility in choosing the types of notifications they wish to receive. We will leverage a Machine Learning model to predict engine gas emissions, utilizing measurement devices connected to the engines.

This is a large-scale application with big data involved, so we also need to work on the architecture and infrastructure to make this software successful. We will do our best to provide you with the best solution, including the AWS cloud services necessary for this software, targeting a monthly AWS cost of $500 (this is challenging; we are not promising it, but we will try).

Key Features

1. Login & Signup System
   a. Implement a secure and user-friendly login system to authenticate users.
   b. Develop a signup process to allow new users to create accounts.
   c. Integrate an email verification mechanism to confirm the authenticity of user-provided email addresses.
   d. Send a verification email to the user with a unique token or link for account activation.
   e. Utilize industry-standard encryption techniques to protect user credentials.
   f. Implement secure password storage practices, such as hashing and salting.
   g. Use a secure backend framework to handle user authentication and authorization.
   h. Implement secure token (JWT) generation for email verification (an illustrative sketch follows the Key Features list).
2. Assets Module
   a. Asset Overview
      i. Displaying all assets of the organization.
      ii. Each asset entry will include details such as asset name, location, type, and connected engines.
      iii. Clear representation of total fuel consumption and total CO2 emissions at the asset level as well as the organization level.
      iv. Display of the number of users currently working with each asset, offering insights into asset utilization and collaboration.
      v. Pagination for assets, enabling efficient navigation through a large number of entries.
   b. Engine Details
      i. An engine subsection within each asset entry, listing all engines associated with that asset.
      ii. Clear representation of total fuel consumption and total CO2 emissions for each engine.
   c. Search Functionality
      i. A powerful search feature allowing users to find specific assets or engines based on their names.
      ii. Quick and intuitive search results for an enhanced user experience.
   d. Sorting and Filtering
      i. Sorting options based on asset or engine names for easy organization.
      ii. Filter options, including location and type, to quickly narrow down relevant information.
   e. Bulk Upload of Assets
      i. Users can download a sample CSV file with the required format.
      ii. Fill in asset details offline and upload the completed CSV for quick addition of multiple assets.
   f. Asset Management
      i. Add new assets with relevant details directly through the dashboard.
      ii. Edit existing asset details for updates and modifications.
      iii. Delete assets that are no longer in use or relevant.
      iv. Add engines under each asset for a detailed representation.
   g. Asset Analytics
      i. Average fuel consumption graph for the asset over a selected period.
      ii. Total engines' fuel consumption displayed individually and collectively.
      iii. Load graph for each engine to monitor performance.
3. General Settings and Notifications
   a. User Account
      i. Users can update their personal information.
      ii. Change passwords and manage authentication details.
   b. Report Configuration
      i. Configure automated workflows to create reports automatically.
      ii. Configure automated report workflows to run at predefined intervals (e.g., every month, every two months, every 15 days).
   c. Notification Settings
      i. Configure alerts and notifications based on user preferences.
4. Reporting Module
   a. View and Download Reports
      i. View all generated reports in a centralized location.
      ii. Apply filters based on date range, report type, asset location, and specific assets.
      iii. Pagination for organized viewing of multiple reports.
      iv. Download reports in CSV and PDF formats.
   b. Create Manual Report
      i. Users can generate on-demand reports with customized filters.
      ii. Options to choose specific assets, date ranges, and report types.
   c. Create Automated Reports
      i. Reports will be created automatically based on the report configurations.
5. Engine Set Up
   a. Add engines under each asset.
   b. Edit existing engine details for updates and modifications.
   c. Delete engines that are no longer in use or relevant.
6. Billing
   a. Displaying all invoices.
   b. Charge customers based on the number of assets.
   c. Variable per-asset fees based on client agreements.
   d. View and download invoices directly from the Billing page.
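To make the authentication items in Key Feature 1 concrete, below is a minimal sketch of salted password hashing and JWT-based email verification. It assumes Python with the bcrypt and PyJWT packages purely for illustration; the SECRET_KEY value, function names, and 24-hour expiry are placeholder assumptions, and the final implementation would follow the backend chosen in section 4 (for example, ASP.NET Identity's built-in facilities).

# Illustrative only: assumes Python with the bcrypt and PyJWT packages.
# The production implementation may instead use ASP.NET Identity (see section 4.B).
import datetime

import bcrypt
import jwt

SECRET_KEY = "replace-with-a-secret-from-a-vault"  # placeholder; never hard-code in production


def hash_password(plain_password: str) -> bytes:
    """Hash and salt a password before storing it (Key Feature 1.f)."""
    return bcrypt.hashpw(plain_password.encode("utf-8"), bcrypt.gensalt())


def verify_password(plain_password: str, stored_hash: bytes) -> bool:
    """Check a login attempt against the stored hash."""
    return bcrypt.checkpw(plain_password.encode("utf-8"), stored_hash)


def create_email_verification_token(user_id: str, email: str) -> str:
    """Issue a short-lived JWT embedded in the account-activation link (Key Feature 1.h)."""
    payload = {
        "sub": user_id,
        "email": email,
        "purpose": "email_verification",
        "exp": datetime.datetime.utcnow() + datetime.timedelta(hours=24),
    }
    return jwt.encode(payload, SECRET_KEY, algorithm="HS256")


def confirm_email_verification_token(token: str) -> dict:
    """Validate the token when the user clicks the activation link; raises if expired or tampered with."""
    return jwt.decode(token, SECRET_KEY, algorithms=["HS256"])

Whatever the final stack, the flow is the same: the verification email carries the signed token, and the backend decodes it to activate the account.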
2. Obstacles

● Unforeseen technical limitations, such as hardware constraints or compatibility issues with our software, may pose challenges. Regular testing and contingency plans are essential to address these limitations promptly.
● Consuming third-party APIs for implementing the graphs or potential analytics may introduce constraints. Any obstacles in our collaboration with these providers could potentially lead to increased project costs.
● Frequent scope changes might impact project timelines and overall coherence.
● Security measures are crucial but may encounter challenges in striking the right balance between data protection and user accessibility. There is currently no information on which data needs to be shown to which users.
● Ensuring seamless integration of diverse modules may present technical complexities. For example, since we are using AWS SageMaker, we might face some challenges when integrating it with our software.
● There can be a variety of notification-related elements. It is crucial to specify the type of notifications, as this can have a significant impact on the budget.

3. Budget Fluctuations

● As per the provided design, analytics includes five types of asset analytics, so the cost may be affected by additional analytics. It depends critically on what is included in "other analytics".
● All the key features mentioned above, in addition to any other features, may impact the budget.
● The notification type is not yet clear, so this can have a significant impact on the budget.

4. Suggested Tech

A. Database

When working with big data from sensor devices, the choice of a NoSQL database for analytics, graphs, and comparisons is driven by the large volume of data involved and the specific requirements of your use case. MongoDB and DynamoDB are both suitable for handling diverse data types and providing scalability. Given the nature of this software, we suggest MongoDB, because its replica sets provide primary and secondary nodes that help avoid blocking (a brief connection sketch follows this list).

● Primary Node
  ○ The primary node is the main node that receives all write operations. It is the only member of the replica set that can accept write operations.
  ○ It handles all write operations and replicates the changes to the secondary nodes.
● Secondary Nodes
  ○ Secondary nodes replicate data from the primary node asynchronously.
  ○ They can be used to distribute read operations, providing scalability and fault tolerance.
  ○ If the primary node fails, one of the secondaries can be automatically promoted to the primary role to ensure continuous operations.
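As a brief illustration of the replica-set behaviour described above, the sketch below connects to a three-node replica set and routes analytics reads to secondary nodes so the primary stays free for incoming sensor writes. It assumes Python with the pymongo driver; the hostnames, the replica set name rs0, and the emissions/engine_readings database and collection names are placeholders, not agreed specifications.

# Illustrative only: assumes Python with the pymongo package; hostnames, the
# replica set name "rs0", and the database/collection names are placeholders.
from pymongo import MongoClient, ReadPreference

# Connect to the replica set; writes always go to the primary node.
client = MongoClient(
    "mongodb://db-node-1:27017,db-node-2:27017,db-node-3:27017/?replicaSet=rs0"
)

# Route analytics reads to secondary nodes where possible, keeping the
# primary free for incoming sensor writes (the pattern described above).
readings = client.get_database(
    "emissions",
    read_preference=ReadPreference.SECONDARY_PREFERRED,
).get_collection("engine_readings")

# Example write (handled by the primary) and read (served by a secondary if available).
readings.insert_one({"engine_id": "ENG-001", "co2_kg": 12.4})
latest = readings.find_one({"engine_id": "ENG-001"})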
B. Backend

In our gas emission monitoring software, a major part of the workload is CPU-bound, so there will be a lot of processing involved. We can therefore consider .NET or Python, but using Azure Functions with a microservices approach is preferable.

   a. WHY .NET
      i. .NET provides a robust, built-in authentication mechanism, such as ASP.NET Identity, making it easier for developers to implement secure user authentication without having to build it from scratch. This not only saves development time but also ensures a standardized and secure authentication process. Utilizing the built-in authentication features in .NET contributes to cost-effectiveness, as it reduces the need for extensive custom development or third-party solutions.
      ii. .NET, along with the ability to quickly add OpenID support through libraries, contributes to a cost-effective and efficient development process, enabling developers to focus on building core functionality while leveraging established solutions for authentication.
      iii. We will utilize Azure Functions with a microservices approach, ensuring that each function operates independently without dependencies on others. In Azure Functions, we are not constrained to a specific programming language such as .NET, Python, or Java, allowing flexibility in our choice. When adopting a microservices approach with Azure Functions, we typically design the application as a collection of small, independent, and loosely coupled services.
      iv. Data collection may be a separate app, built in .NET or Node; if it requires authentication, .NET is the better fit. In either case, we suggest keeping it separate so it can be scaled independently.
      v. .NET has excellent support for multi-threading through the Task Parallel Library (TPL) and asynchronous programming features.
         C# provides easy-to-use constructs for managing threads and parallel processing. Because our software is heavily CPU-bound and requires extensive parallel processing, a language with good support for multi-threading and/or multiprocessing (such as .NET/C#, or Python with multiprocessing) is more suitable; an illustrative sketch follows subsection E below. .NET applications can achieve high performance, especially with the introduction of features like async/await and the Task Parallel Library.
   b. NODE JS
      i. For data processing, we will use AWS Lambda, which supports Node.js as well.

C. Frontend

   a. For the frontend, we suggest using React or Angular, as per the project's nature. Both React and Angular are capable of delivering high-performance applications. React is a library and is normally best suited for small-scale applications, while Angular is a framework rather than a library and is normally used for enterprise applications.

D. Version Control

   a. We will use GitHub for version control, ensuring efficient collaboration, change tracking, and project integrity.

E. Task/Issue Tracking and Documentation

For issue and task tracking, we will use ClickUp, a robust platform that not only streamlines project management but also provides a centralized space for documentation.
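To make the CPU-bound processing argument in subsection B concrete, here is a minimal sketch using Python's multiprocessing module, one of the options named there (the equivalent in .NET would use the Task Parallel Library). The shape of a "reading" and the CO2_KG_PER_LITRE emission factor are illustrative placeholders, not agreed specifications.

# Illustrative only: CPU-bound parallel processing with Python's multiprocessing
# module, one of the options mentioned in 4.B. The reading structure and the
# emission factor below are placeholder assumptions.
from multiprocessing import Pool

CO2_KG_PER_LITRE = 2.68  # placeholder diesel emission factor, for illustration


def estimate_co2(reading: dict) -> dict:
    """CPU-bound worker: derive CO2 emissions from one fuel-consumption reading."""
    co2_kg = reading["fuel_litres"] * CO2_KG_PER_LITRE
    return {"engine_id": reading["engine_id"], "co2_kg": round(co2_kg, 2)}


def process_batch(readings: list[dict]) -> list[dict]:
    """Fan a batch of sensor readings out across CPU cores."""
    with Pool() as pool:  # defaults to one worker process per CPU core
        return pool.map(estimate_co2, readings)


if __name__ == "__main__":
    batch = [
        {"engine_id": "ENG-001", "fuel_litres": 120.0},
        {"engine_id": "ENG-002", "fuel_litres": 95.5},
    ]
    print(process_batch(batch))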
F. Testing

In this phase we will focus on functional testing, which evaluates the application's functionality against specified requirements to ensure that it meets the intended business logic and user expectations. Unit testing, load testing, and regression testing will also be part of the current phase.

G. Continuous Integration and Continuous Deployment (CI/CD)

Set up CI/CD pipelines to automate the testing, building, and deployment processes using DevOps tooling or AWS CodePipeline. AWS CodePipeline is preferable because we are going to use mostly AWS services.

5. Timeline

The estimated time, including development and testing, is 4 months.

6. Cost Estimate

1 Solution Architecture Consultant        $4,000
2 Backend Engineers (4 Months)            $32,000
1 Front End Engineer (3 Months)           $12,000
2 QA Resources                            $16,000

The total tentative cost will be $64,000.
7. Required Team

1. Solutions Architect Consultant    1 resource
2. Backend Engineer                  2 resources
3. Frontend Developer                1 resource
4. QA                                2 resources

NOTE: The suggested tech stack can be changed after a final discussion with the client before implementation. We have some questions that are not blockers for this proposal, so we will discuss them at the time of implementation.

Signed as accepted by client:

Steve Manz
25-JAN-2024