| Selvam - Big Data Eng |
| [email protected] |
| Location: Atlanta, Georgia, USA |
| Relocation: |
| Visa: |
| Resume file: Selvam_1759244709544.docx |
|
Selvam
Databricks Certified Data Engineer Professional | [email protected] | 470 554 0174

WRITE-UP FROM CANDIDATE
- Designed and implemented scalable ETL pipelines on Databricks using Apache Spark (Scala) to process and transform Max app client events, covering structured and semi-structured data from Delta sources.
- Automated data workflows using Databricks Workflows and integrated them with CI/CD pipelines (Azure DevOps, GitHub Actions) for streamlined deployment and testing.
- Optimized Spark workloads via partition tuning, caching strategies, and broadcast joins, reducing job runtime by 30-40% and improving cost efficiency.
- Developed and orchestrated end-to-end ETL workflows using Databricks Workflows and integrated deployment pipelines with CI/CD tools (Azure DevOps, GitHub Actions).
- Collaborated closely with Data Science, Analytics, and Product teams to provision clean, reliable datasets for advanced analytics and ML model training.
- Provided technical mentorship to junior engineers and led code reviews, design sessions, and performance-tuning efforts across the team.
- Designed and developed a data ingestion framework for real-time streaming data from Kafka using Apache Druid, Scala/Spark, and PySpark Structured Streaming, scheduled with Airflow workflows, and implemented a Druid-based data pipeline for real-time analytics.
- Developed detailed programming logic for the data computation layer on Azure Databricks using the PySpark/Spark/Scala DataFrame and Dataset APIs with the respective transformation and action operations.
- Extracted source Hive data using the Spark DataFrameReader API and target data using the OJAI API, joined the DataFrames with a broadcast join, transformed the dataset by cleaning and enriching it, and stored the result in MapR-DB from the Spark executors using parallel processing.
- Designed and developed an ETL application on an EMR cluster to decouple the storage and compute clusters.
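The broadcast join called out above avoids a shuffle by replicating the small table to every executor (in Spark, `broadcast(small_df)` plus a join). A minimal pure-Python sketch of the same idea, with illustrative names not taken from any actual codebase:

```python
# Sketch of a broadcast (map-side) join: when one table is small, ship a
# full copy of it to every worker so the large table can be joined without
# a shuffle. Here the "broadcast" side is a dict built once and probed per
# event row.

def broadcast_join(events, dim_rows, key):
    """Join a large iterable of event dicts against a small dimension table."""
    # Build the lookup once -- the analogue of broadcasting the small side.
    lookup = {row[key]: row for row in dim_rows}
    joined = []
    for event in events:
        dim = lookup.get(event[key])
        if dim is not None:  # inner-join semantics: drop unmatched events
            joined.append({**event, **dim})
    return joined

events = [
    {"user_id": 1, "action": "play"},
    {"user_id": 2, "action": "pause"},
    {"user_id": 9, "action": "play"},   # no matching dimension row
]
users = [{"user_id": 1, "plan": "ad-free"}, {"user_id": 2, "plan": "basic"}]

result = broadcast_join(events, users, "user_id")
```

The win is the same in both settings: the probe side is scanned once with a constant-time lookup per row, instead of repartitioning both sides by the join key.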
- Developed an integration application using Scala, PySpark, Java, and the Kafka Producer/Consumer API with the ELK stack; used Jenkins with GitHub, Maven/Gradle builds, JFrog Artifactory, and uDeploy for deployment, working in Agile Scrum with Jira and Confluence.
- Implemented a CI/CD pipeline for the Spark and Airflow components, building and deploying the artifacts to a NAS Unix directory using Jenkins.

PROFESSIONAL SUMMARY
Certified Databricks Data Engineer Professional with hands-on experience in building scalable data pipelines, optimizing Spark workloads, and managing data lakes on the Lakehouse platform. Dynamic and results-driven IT professional with 19 years of overall experience, including 8+ years of relevant experience as a Senior Big Data Engineer. Proven expertise in leading and architecting data-intensive applications using the Hadoop ecosystem, Big Data analytics, and cloud-based data engineering solutions. Strong background in designing and developing data warehouses and large-scale systems using Scala/Spark, PySpark, Python, and Java.
Leading and hands-on experience in Databricks, Spark, Python, Scala, Big Data platforms, Apache Druid, AWS cloud, microservices, NoSQL databases, Hive, HBase, Kafka, JMS, JAX-WS and JAX-RS web services, Java/J2EE, microservices using Spring Boot and Spring Cloud, XML, shell scripting, Spring, Hibernate, JDBC and RDBMS, Jira, Confluence, and CI/CD using Git, Jenkins, JFrog Artifactory, uDeploy, and GitHub Actions. Deep understanding of the Media and Entertainment, Retail, Banking, Capital Markets, and Insurance domains and of IT processes, with strong experience in client interaction and delivery.
Hands-on with data pipelines using Databricks Spark (Scala and Python) and Apache Druid, including SCD Type 2 ETL, with expertise in requirement gathering and analysis, design, development, implementation, modeling, testing, and support for data warehousing applications using Spark, HDFS, MapR, YARN, Hive, Sqoop, Autosys, MapR-DB, MongoDB, HBase, and ZooKeeper. Good knowledge of and experience with Amazon AWS services such as EC2, S3, ECS, EFS, EMR, ELB, EKS, SNS, and Lambda for fast, efficient processing of Big Data. Experience with serverless technologies (Lambda) and CI/CD using AWS CodePipeline, CodeCommit, CodeBuild, and CodeDeploy. Extensive experience in requirements gathering, analysis, design, coding, code reviews, and unit and integration testing. Expertise in working with ETL architects, data analysts, and data modelers to translate business rules/requirements into conceptual, physical, and logical dimensional models; worked with complex normalized and denormalized data models; expertise in data extraction, transformation, and load in Hive, MapR-DB, and HBase, with data transformation experience across HDFS, Hive, MapR-DB, HBase, and Oracle. Good exposure to every phase of the Software Development Life Cycle (SDLC), developing projects from concept to full implementation.

SKILL SET
Big Data Ecosystem: Databricks, Delta Lake, Spark, Apache Druid, PySpark, HDFS, Hive, Pig, Flink, HBase, MapR-DB, Sqoop, Oozie, Airflow, Kafka, ZooKeeper, Hadoop, Iceberg, Presto, Trino, Red Hat Ceph, OpenIO, Scality RING, ORC, Parquet
Hadoop Distributions: Apache Hadoop 2.x/1.x, Cloudera CDP, Hortonworks HDP, Amazon EMR (EMR, EC2, EBS, RDS, S3, Elasticsearch), Azure HDInsight
Programming Languages: Python, Scala, Java, SQL, HiveQL, PL/SQL, UNIX shell scripting, Groovy
Tools: IntelliJ, PyCharm, Azure Databricks, Kafka, Offset client, Jupyter Notebook, IBM ESB, Eclipse 3.2, MyEclipse, RAD 7.5, NetBeans 6.7, TibcoGI, IBM WID for SOA, SOAP UI, PuTTY, FileZilla, WinSCP, PL/SQL Developer, TOAD 8.0, Hue, Splunk, Jira & Confluence
Databases: SQL Server, Oracle 10g & 11g, MySQL, DB2, Teradata, PostgreSQL, Apache Druid
NoSQL Databases: HBase, Cassandra, MongoDB, MapR-DB
DevOps Tools: Jenkins, Docker, Maven, Gradle, SBT, JFrog, uDeploy, RIO, CodeCommit, CodeDeploy, CodePipeline
Cloud: AWS (EC2, VPC, EBS, SNS, RDS, EFS, EKS, Lambda, S3, Auto Scaling, CloudWatch), GCP Storage bucket, Azure HDInsight (Apache Spark, Apache HBase), Azure Data Lake Storage, Snowflake, cloud monitoring
Version Control: Git, SVN, Bitbucket
Operating Systems: macOS, Windows 7/8/10, Unix, Linux, Ubuntu
Web Technologies: REST web services, JAX-WS (Metro), REST Swagger UI, J2EE design patterns & design principles, Log4j, JDBC, JSP, Servlets, SOA, Spring, Spring Boot, Struts 2, Hibernate 3.0, MyBatis
Application Servers: Apache Tomcat, WebSphere, WebLogic, JBoss
Methodologies: RAD, JAD, UML, System Development Life Cycle (SDLC), Jira, Confluence, Agile, Waterfall

PROJECT EXPERIENCE

Client: WBD Max, Alpharetta, GA | Sep 23 to Present
Role: Senior Data Engineer (Databricks)
Description: The Product Engagement Data Engineering (PEDE) team is dedicated to leveraging data and technology to enhance user experiences and optimize product performance by developing data-driven solutions that inform analytics and technical decisions. The Product and Data teams plan to collect user engagement data to enable reporting functions that inform dashboard creation and analytical exercises to better understand and predict how users interact with the Max product.
These insights are tracked across .com, in-app, web, mobile, CTV, gaming-console, and set-top-box experiences and include (but are not limited to) engagement data across product, profiles, viewership, and interactions, providing insight into product engagement across platforms, features, capabilities, content, and times so that the team can better understand how the audience interacts with and views content on the Max product.
Responsibilities:
- Developed and optimized scalable data pipelines to support real-time and batch processing of streaming content metrics, user engagement, and viewership analytics across the Max platform.
- Engineered robust ETL workflows using tools such as Apache Airflow, Spark, and Databricks Workflows to ingest and transform multi-terabyte datasets from diverse sources (e.g., playback logs, CDN metrics, subscription data).
- Collaborated with product analysts and content strategy teams to deliver clean, curated datasets that power personalized recommendations, A/B testing, and content performance dashboards.
- Ensured data integrity and compliance with privacy regulations (GDPR, CCPA) by building PII masking and access control mechanisms.
- Optimized storage and compute costs on AWS through partitioning, compression, and data lifecycle management policies.
- Designed and developed a platform for creating data ingestion pipelines, with CI/CD for the pipeline that can be reused by any application onboarding to the analytics platform.
- Performed performance tuning on Databricks jobs (Spark applications), improving job execution time and reducing resource consumption by implementing fair pools with the fair scheduler.
- Refactored Spark code and configuration settings to optimize processing of large datasets in distributed environments, and extracted tenant-agnostic logic into reusable modules to support a scalable multi-tenant architecture.
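One common way to implement the PII masking mentioned above is deterministic salted hashing: the raw identifier never leaves the pipeline, but the masked token is stable, so downstream datasets can still be joined on it. A minimal sketch, where the salt, field names, and token length are illustrative assumptions rather than details from the project:

```python
import hashlib

# Illustrative salt and field list -- in practice these would come from a
# secrets store (e.g., Vault) and a governed PII classification, not code.
SALT = b"rotate-me-via-vault"
PII_FIELDS = {"email", "device_id"}

def mask_record(record):
    """Replace PII fields with a salted-hash token; pass other fields through."""
    masked = {}
    for field, value in record.items():
        if field in PII_FIELDS and value is not None:
            digest = hashlib.sha256(SALT + str(value).encode("utf-8"))
            masked[field] = digest.hexdigest()[:16]  # short, stable token
        else:
            masked[field] = value
    return masked

row = {"email": "a@example.com", "device_id": "dev-42", "watch_minutes": 37}
masked = mask_record(row)
```

Because the hash is deterministic for a given salt, two datasets masked with the same salt still join on the masked column; rotating the salt severs that linkability when required.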
Technology and Environment: Databricks Spark, Scala, AWS, S3, Delta Lake, Snowflake, Maven, SBT, Kafka, Airflow, Delta Storage & Tables, Hive, Postman, GitHub Actions, shell script, GitHub, Looker, Vault management, ELK, Monte Carlo

Client: CNHi, Charlotte, NC | Nov 22 to Aug 23
Role: Senior Scala/Spark Developer (Design/Developer)
Description: GeoSpatial Storage (GSS) processes and stores data from CNHi-owned and third-party devices in near real time, enabling geospatial queries and geo-analytics. Every pipeline for Telematics, Tierra, Machine Health, CE ATT, and EdgeX has live and backfill jobs, using Apache Flink, Apache Druid, and Spark. File feeds from SDP are processed and ingested into the data lake using Spark ETL jobs implemented within Azure Databricks. GSS has the following objectives: create one source of truth in the data; stream and store live data from IoT devices; support a near real-time data stream; enable customers to view their field data in a timely manner; enable data access for internal CNHi business units; enable the creation of engineering and data science pipelines and geospatial queries on the data; and support rules based on telemetry data (rule engine). A synthetic pipeline writes the processed telemetry from the process event hub to the output Event Hub, which is the serving layer for the Analytics team.
Responsibilities:
- Involved in design and development of the data telemetry pipeline framework using Big Data tools such as Spark, Azure Databricks, and Apache Druid on the Azure cloud; benchmarked performance and implemented the real-time, backfill, and rule-engine components in Scrum methodology.
- Developed a Spark backfill ingestion framework with common reusable components to support new data ingestion pipelines across applications for better maintainability, extensibility, and scalability.
- Scoped technical requirements and handled coding and testing with Scala as an individual contributor; shared knowledge and supported all DE team members to achieve consistent, high-quality production releases.
- Proactively coordinated with the Enablement Engineering, Cloud Ops, DevOps, Hadoop admin, Network, Unix, and KITE teams whenever needed for on-time, high-quality delivery of the project.
Technology and Environment: Spark, Scala, Flink, Apache Druid, Databricks, Azure HDInsight, Azure Data Lake Storage Gen2, Azure Blob Storage, Event Hubs, Service Bus, Maven, SBT, Hadoop, HDFS, Kafka, Delta Storage & Tables, Hive, Postman, Azure Pipelines, DevOps, HBase, shell script, Bitbucket, Apache Superset, Power BI, Vault management, ELK, Kibana, Nagios, Splunk, Grafana

Client: Lowes, Charlotte, NC | Oct 21 to Nov 22
Role: Senior PySpark/Apache Druid Developer (Sr Data Engineer - Design/Developer)
Description: Ingestion pipelines developed on the data analytics platform for the data emitted by various applications in a suite of applications. They provide the retailer's telemetry events and backend data required by the analytics team to build dashboards and reports (using Superset and Power BI) for business insights and informed decisions, measuring and reporting metrics and KPIs crucial to the business and to understanding user behavior and experience in stores. To support timely business decisions, the real-time events are ingested into the Hadoop ecosystem using Spark and Apache Druid. Application teams publish events to Kafka topics; the Spark/Druid ingestion consumes them as structured streams, records the offsets and partitions in a checkpoint, and stores the data in Hadoop in ORC/Parquet format, with an external Hive table created and exposed for the reporting process.
Responsibilities:
- Involved in the design, development, and testing phases of the data pipeline framework using Big Data tools such as Spark, Apache Druid, and Hive; benchmarked performance and implemented the real-time components.
- Set up the platform for the DE team to build and deploy data ingestion pipelines, coordinating with the Platform, KITE, Network, Hadoop/Druid admin, and Jenkins teams.
- Developed an Apache Druid and PySpark ingestion framework with common reusable components to support new data ingestion pipelines across applications for better maintainability, extensibility, and scalability.
- Coordinated with data analysts and application stakeholders to understand the data schema so that quality data is ingested for effective reporting.
- Designed and developed a platform for creating data ingestion pipelines, with CI/CD for the pipeline that can be reused by any application onboarding to the analytics platform.
- Involved in Airflow-PySpark integration and Airflow-Slack integration; designed and developed DAGs for Spark streaming micro-batches, data replication, and periodic refresh jobs.
- Prepared and standardized Bitbucket branching strategies, Jenkins configuration, and the code commit, build, and deployment process for the team, documented in Confluence.
- Proactively coordinated with the Airflow, Hadoop admin, Network, Unix, and KITE teams whenever needed for on-time, high-quality delivery of the project.
Technology and Environment: Python, PySpark, Apache/Strimzi Kafka, Spark, Scala, Hive, Apache Druid, Presto, Trino, GCP, Great Expectations, Postman, Apache Airflow, Oozie, DevOps, Jenkins (for integration), MongoDB, shell script, Bitbucket, Apache Superset, Power BI, Vault management, Nagios, Splunk, Grafana
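The offset-and-partition checkpointing described in the Lowes pipeline is what lets a streaming consumer restart without reprocessing or skipping events: each micro-batch commits the last consumed offset per partition only after its output is written. Spark Structured Streaming does this internally; the file layout below is a simplified pure-Python illustration, not the project's actual format:

```python
import json
import os

def load_checkpoint(path):
    """Read the committed per-partition offsets, or start empty."""
    if os.path.exists(path):
        with open(path) as f:
            return {int(k): v for k, v in json.load(f).items()}
    return {}

def process_batch(path, partition, messages):
    """Consume only messages past the committed offset, then commit."""
    offsets = load_checkpoint(path)
    begin = offsets.get(partition, 0)
    batch = messages[begin:]          # only unseen messages
    # ... transform and write the batch to ORC/Parquet here ...
    offsets[partition] = begin + len(batch)
    with open(path, "w") as f:        # commit new offsets after the write
        json.dump(offsets, f)
    return batch
```

Committing the offset after the write gives at-least-once delivery on crash recovery; exactly-once additionally requires the write itself to be idempotent or transactional.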
Client: Wells Fargo, Charlotte, NC | Nov 19 to Sep 21
Role: Senior Spark/Scala Developer (Design/Developer)
Description: The bitemporal process framework allows the financial institution to track critical financial risk database documents along two time axes simultaneously. It lets the firm keep track of when an event occurred (the valid time) as well as when the data was entered into the database (the system time). Scala and Spark are used extensively to develop the application, with the Scala/Spark API handling volumetric data for faster testing and processing.
Responsibilities:
- Involved in the design, development, and testing phases of the bitemporal component using various Hadoop and ETL tools, initially for a POC using the Scala API; benchmarked performance and implemented the batch processing component.
- Developed multiple POCs to identify the appropriate NoSQL DB among MongoDB, HBase, and MapR-DB for high throughput and low latency, interacting with multiple stakeholders and system owners for successful delivery.
- Engaged with data analysts and data modelers to translate business rules/requirements for the MBDA data automation framework Autobot, and worked with complex normalized and denormalized data models.
- Re-designed the bitemporal process by decoupling data segregation and pre-processing from the core bitemporal invocation and processing.
- Prepared design documents and developed the data ingestion framework using Spark and Scala.
- Evaluated, designed, and developed the MBDA data validation automation framework for schema and data validation across different zones.
- Reviewed and standardized peers' code, performed peer SIT testing, and designed complex requirements.
- Proactively coordinated with BAs, QA, and Platform Enablement and Support whenever needed for on-time, high-quality delivery of the project.
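The two time axes described above can be made concrete with a small sketch: every version of a record carries a valid-time interval (when the fact was true in the world) and a system-time interval (when the database believed it), and an "as-of" query fixes both axes. Field names and the sample data are illustrative, not taken from the framework:

```python
# Sentinel for "still current" intervals; ISO date strings compare correctly.
HIGH_DATE = "9999-12-31"

def as_of(rows, valid_date, system_date):
    """Return the row versions valid at valid_date, as known at system_date."""
    return [
        r for r in rows
        if r["valid_from"] <= valid_date < r["valid_to"]
        and r["sys_from"] <= system_date < r["sys_to"]
    ]

history = [
    # Rating recorded on 2020-01-05, then corrected on 2020-03-01: the old
    # version is not deleted, its system-time interval is simply closed.
    {"doc": "loan-1", "rating": "BBB",
     "valid_from": "2020-01-01", "valid_to": HIGH_DATE,
     "sys_from": "2020-01-05", "sys_to": "2020-03-01"},
    {"doc": "loan-1", "rating": "BB",
     "valid_from": "2020-01-01", "valid_to": HIGH_DATE,
     "sys_from": "2020-03-01", "sys_to": HIGH_DATE},
]

# What did we believe on 2020-02-01 about the rating valid on 2020-01-15?
before = as_of(history, "2020-01-15", "2020-02-01")
# And what do we believe after the correction landed?
after = as_of(history, "2020-01-15", "2020-04-01")
```

This is why bitemporal storage matters for risk reporting: the firm can reproduce exactly what was known at the time a past decision was made, even after later corrections.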
Technology and Environment: Spark, Scala, Sqoop, Python, MapR-FS, MapR-DB, Hive, Solace, microservices, Spring, Java 1.8, Spring Boot, REST Swagger UI, Postman, Autosys, DevOps, Jenkins (for integration), UCD (for deployment), Gradle, Maven, HBase, S3, MongoDB, REST services, shell script, Git, JFrog, uDeploy, Blackduck & Checkmarx scans

Client: Wells Fargo, Bengaluru, India | Jul 16 to Oct 19
Role: Specialist (Big Data Hadoop Spark/Scala/Python Developer)
Description: DIF (Data Ingestion Framework), also known as the Enterprise Data Lake, brings together data from all LOBs of Wells Fargo, internal and external, including structured and unstructured data, into a centralized horizontal platform. It enables three major categories of use case, namely data warehousing, search, and advanced analytics, from one platform using Big Data Hadoop technologies. DIF is based on a microservices architecture, which simplifies the delivery of independently packaged and deployed application units as part of a larger application.
Responsibilities:
- Developed ingestion and distribution services using PySpark/Scala and Spark.
- Performed data ingestion using Sqoop to import Teradata data into HDFS with the Teradata JDBC connectors.
- Developed an ETL framework for multiple data providers and consumers and multiple data formats using Scala/Spark.
- Designed, developed, and analyzed use cases, activity diagrams, and sequence diagrams using MS Visio; interacted with users, designers, developers, SMEs, and the project manager to better understand the business processes and environment.
- Prepared detailed functional and technical specifications from which programs were written.
- Created fully automated continuous integration (CI) build infrastructure for multiple projects using Git, JFrog, Jenkins, and uDeploy.
- Developed microservices using Spring Boot and Netflix OSS (Zuul, Eureka, and Ribbon).
- Strong hands-on experience with Apache Kafka for communication between microservices and components; knowledge of Spring Boot and microservices-based applications using Spring Cloud tools such as Eureka, Zuul Proxy, Ribbon, and Hystrix.
- Performed data analytics using Spark SQL, deriving and transforming data with the Spark APIs (DataFrameReader & DataFrameWriter).
- Involved in development and enhancement of the core DIF Enterprise Data Lake framework.
Technology and Environment: Big Data Hadoop, Spark, Scala, Python, PySpark, Sqoop, HDFS, Java 1.8, MySQL, Teradata, JDBC, HBase NoSQL, Hive, Drill, Kafka, ZooKeeper, microservices, Spring, Spring Boot, Autosys, DevOps, Jenkins (for continuous integration), Maven, Gradle, REST services, shell script, Git, JFrog, uDeploy

Client: J P Morgan, Bengaluru, India | Feb 14 to Jun 16
Role: Senior Associate
Description: eCLIPS (Electronic Commercial Loan Initiation and Processing System) is a middleware that interacts with different subsystems for the inputs required by loan processing systems. The primary functionality of eCLIPS is to let all users and system partners initiate and post loan transactions to the LIQ (Loan IQ) system. eCLIPS validates the transactions and posts them to the Loan IQ system via LIQWS (Loan IQ Web Service).
Responsibilities:
- Led the scrum meetings, sprint planning and execution, sprint reviews, and retrospectives in a highly agile team; managed a team of 4 members as a technical lead.
- Scoped technical requirements and coded with Java collections, multithreading, and memory management as an application developer.
- Involved in business setup in terms of building knowledge of the current business process, designing current business flows, and studying current business processes and their complications.
- Reviewed and standardized peers' code, designed complex requirements, and wrote high-level and low-level designs as a designer.
- Involved in development and enhancement of the core eCLIPS framework.
- Built from scratch the customer, deal, and facility migration process from ACBS to the LIQ (Loan IQ) loan processing system, and performed data collection and data ingestion into Hadoop.
- Actively monitored and resolved production defects.
Technology and Environment: Java 1.6, web services (JAX-WS using the Metro implementation, JAX-RS using the Jersey implementation), Scala/Spark, Hadoop/HDFS, Hive, JMS, Hibernate 3, Spring 2.5 (IoC, AOP, MVC, ORM), MyBatis, JSP, Servlets 2.5, Apache Tomcat 7, Oracle 11g, IBM MQ, JUnit, Quartz scheduler framework, XML/XSD

Client: Scotia Bank and PwC-SDC, Bengaluru, India | Oct 11 to Jan 14
Role: Senior Technical Lead
Description: This project deals with integrating and implementing the Guidewire PolicyCenter application for Amica, a reputed US insurance firm. The Guidewire PolicyCenter product is developed specifically for insurance companies to make policy automation processing and external-system interaction easier.
Responsibilities:
- Led a team, helping the team members complete their work and updating management and the client on work status daily.
- Designed and developed REST and SOAP services using the JAX-RS and JAX-WS APIs with digital signatures; integrated external systems with ClaimCenter per the enhancements and bug fixes.
- Met with business analysts to resolve functional queries, and resolved technical/functional queries raised by the team.
- Deployed the completed deliverables for testing in different environments.
Technology and Environment: Java 1.6, web services (JAX-WS), REST (JAX-RS), Guidewire 7.0.3, Apache Tomcat 6.0.35, Maven, multithreading (Java Executor framework), JUnit, Mockito, Postman, Oracle 11g, Eclipse IDE 3.6.2
Achievements: Received a spot award for proactively working on a POC and implementation of secured JAX-WS web services for Guidewire claim services, interacting with policy and fraud-check services.

Client: DELL Services, Bengaluru, India | Aug 10 to Oct 11
Role: Associate
Description: Odyssey is a mature, complex, and highly stable car rental billing system used by Enterprise Holdings Inc. Odyssey has served its purpose well; however, changes in the business environment have diminished the cost-effective use of this tool, and it is difficult to find and retain technicians to support it.
Responsibilities:
- Estimation and project planning for user requirements; team handling and tracking of work to completion.
- Involved in enhancement features of this project and in production support.
- Coding and FSD and LLD preparation.
Technology and Environment: Java, Spring, J2EE, JMS, JSP, Maven, SOAP web services, WebSphere Application Server 5.1/6.0, MQSeries 5.3, Oracle 9i, Lotus Domino 6.0, MS Active Directory, UNIX, Linux servers

Client: IBM, Bengaluru, India | Dec 09 to Jul 10
Role: Senior System Engineer
Description: Software Quality Assurance (SQA) involves reviewing and auditing software projects and activities to verify that they comply with the applicable procedures and standards, and providing the software project and other appropriate managers with the results of these reviews and audits.
Responsibilities:
- Involved in enhancement features of this project.
- Involved in production and application support.
- Team handling and tracking of work to completion.
Technology and Environment: Java 1.6, Struts, Spring, Hibernate, WebSphere Application Server 5.1/6.0, MQSeries 5.3, DB2, Quartz, Maven
Client: Deutsche Bank, Bengaluru, India | Dec 07 to Nov 09
Role: Software Engineer
Description: Trade Order Entry (TOE) is a web application used by DB dealers to capture trade details, which are sent to EPIM (European Pre-Issuance Messaging System) for validation. EPIM assigns an EPIM reference number, returns the DIR to TOE, and forwards it to Deutsche Bank as the IPA. The request is processed by the SPIDER application in DB.
Responsibilities:
- Estimation and project planning for user requirements; team handling and tracking of work to completion.
- Project process improvement activities and client interaction.
- Multithreaded coding and development; FSD and LLD preparation.
Technology and Environment: Java, Servlets, web services (JAX-WS), JPA, Hibernate, JMS, Ajax, SQL Server, WebLogic 10, Maven build

Client: Aegon Insurance, The Hague, Netherlands | Jan 07 to Nov 07
Role: Software Engineer
Description: The AEGON Integration Layer (AIL) is proposed to be the central integration layer across all AEGON Netherlands business units. AEGON has taken a strategic decision to do away with multiple integration layers and create a single framework based on a common platform.
Responsibilities:
- Involved in requirement gathering for the business services and integration with external web services, onsite in the Netherlands.
- Prepared and reviewed high-level and low-level designs.
- Supported other modules by providing solutions for technical implementation.
- Involved in testing, application deployment, and production support.
Technology and Environment: Java, IBM ESB, web services, IBM WebSphere Integration Developer (WID), WPS, SOAP

Client: Boeing Inc, Chennai, India | Jan 06 to Jan 07
Role: Software Engineer
Description: The Flight Test Computing System (FTCS) is a computing platform used to ensure that every airplane produced by the Boeing Company meets U.S. and foreign government certification standards. The FTCS application assists the Flight Test Organization.
Responsibilities:
- Involved in requirements gathering, study of the flight-testing system, Test Planning application enhancements, and providing technical implementation solutions for modules, onsite in the US.
- Prepared and reviewed high-level and low-level designs for system enhancements.
- Developed the client controller and UI (screens) in Swing for the Instrumentation Records (IR) module and provided technical implementation solutions for other modules.
- Coordinated with the Boeing experts on clarifications, technical solutions for implementing various business rules, and deviations taken by the offshore development team from the functional specifications.
Technology and Environment: Java, Servlets, JSP, EJB 2.0, Swing, XML, Struts, Oracle 9i, Oracle 10g, iText 1.3, WebSphere, JBoss 3.2.3

EDUCATIONAL QUALIFICATIONS
Master of Computer Application (MCA) from Bharathidasan University, Tamil Nadu, India

CERTIFICATIONS
Databricks Certified Data Engineer Professional, Databricks | Issued September 2025
Verification: https://credentials.databricks.com/fda907d1-8aa3-4b13-9000-0bab76c179c7#acc.9uYq1yIG