
ENGINEERING

Fluor Corporation Chemicals Project Named Project of the Year by Engineering News-Record

By Subcontractors USA News Provider

Fluor Corporation announced today that MEGlobal’s world-scale 750,000 metric-ton-per-annum monoethylene glycol and diethylene glycol facility built in Oyster Creek, Texas, was named Energy/Industrial Project of the Year for the Texas and Louisiana region by Engineering News-Record (ENR). Fluor served as construction contractor on the project, which began operations in October 2019 and marked the first time that MEGlobal’s parent company, the EQUATE Petrochemical Group, constructed a new ethylene glycol facility in the United States. The project was completed safely, ahead of schedule and under budget.

“Congratulations to our client MEGlobal and the project team for this well-deserved industry recognition,” said Mark Fields, president of Fluor’s Energy and Chemicals business. “Because of your tireless efforts, together we were able to deliver this world-class project safely, ahead of schedule and under budget, no small feat for a project of this size and complexity.”

“This was an outstanding project in every sense, and we are incredibly proud to be recognized for the teamwork, safety commitment and operational excellence demonstrated by the entire team,” said Clarence Stadlwieser, MEGlobal project director.

The project, along with other eligible entries, was reviewed by an independent panel and judged on the following criteria: overcoming challenges and teamwork, safety, innovation and contribution to the industry/community, construction quality and craftsmanship, and functionality of design and aesthetic quality. It was one of just 18 projects selected from more than 130 submissions.

The winners of ENR’s Best Project Awards will be celebrated at a ceremony in Houston on October 23.

About Fluor Corporation

Fluor Corporation (NYSE: FLR) is a global engineering, procurement, fabrication, construction and maintenance company with projects and offices on six continents. Fluor’s 47,000 employees build a better world by designing, constructing and maintaining safe, well-executed, capital-efficient projects. Fluor is ranked 181 among the Fortune 500 companies. With headquarters in Irving, Texas, Fluor has served its clients for more than 100 years. For more information, please visit www.fluor.com or follow Fluor on Twitter, LinkedIn, Facebook and YouTube.

Source: Fluor

IT & TECHNOLOGY

Why Unbiased AI Is Essential to Building a Better Working World

By Subcontractors USA News Provider

It’s crucial for the long-term development of AI that the technology is perceived as fair. Otherwise, trust in AI may be lost for a generation.

The COVID-19 crisis placed unprecedented strain on social contracts around the world. The pandemic’s disproportionate impact on minority and underprivileged communities, both in terms of health and economic costs, has exposed systemic inequalities along racial and ethnic lines in many societies.

Public anger is rightfully directed at the status quo, creating near-term risks for companies navigating this volatile social climate. Amid the global Black Lives Matter protests, businesses are being forced to reassess their social responsibilities, including ensuring the deployment of artificial intelligence (AI) technologies is fair and unbiased.

For example, IBM recently announced that it will no longer offer facial recognition or surveillance technology due to concerns over bias. Microsoft and others have placed a moratorium on facial recognition collaborations with law enforcement agencies. This demonstrates not only ethical leadership but also astute risk management.

Addressing the blind spot of algorithmic bias

Algorithmic bias refers to repeated and systematic errors in a computerized system that generate unfair outcomes, including favouring one group over another. Bias can derive from a range of factors, such as the design of the algorithm, the “training data” used as inputs, or unanticipated applications, among others.
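One common way to put a number on “favouring one group over another” is the demographic-parity gap: the difference in positive-outcome rates between two groups. The sketch below is a hypothetical illustration; the loan-approval data and group labels are invented, not drawn from the survey discussed here.

```python
# Hypothetical sketch: quantifying systematic favouring of one group
# over another via the demographic-parity gap -- the difference in
# positive-outcome rates between two groups. All data is invented.

def selection_rate(outcomes):
    """Fraction of positive (1) outcomes within a group."""
    return sum(outcomes) / len(outcomes)

def demographic_parity_gap(group_a, group_b):
    """Absolute difference in selection rates between two groups.
    A gap near 0 suggests parity; a large gap flags possible bias."""
    return abs(selection_rate(group_a) - selection_rate(group_b))

# Invented example: loan decisions (1 = approved) split by group.
group_a = [1, 1, 1, 0, 1, 1, 0, 1]  # 6/8 = 75% approved
group_b = [1, 0, 0, 0, 1, 0, 0, 1]  # 3/8 = 37.5% approved

gap = demographic_parity_gap(group_a, group_b)
print(f"Demographic parity gap: {gap:.3f}")  # prints 0.375
```

A metric like this is only a screening signal, not proof of discrimination, but it gives auditors and regulators a concrete quantity to monitor.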

EY’s survey with The Future Society, “Bridging AI’s trust gaps” (pdf), completed in late 2019, just before the arrival of the novel coronavirus, asked policymakers and companies to prioritize ethical principles across a range of AI use cases. The results show that the biggest divergence in ethical priorities between companies and policymakers occurs in applications concerning the principle of fairness and avoiding bias. Further, we specifically asked companies and policymakers about two topical use cases: facial recognition and surveillance. The chart below shows the ten biggest gaps in ethical priorities between policymakers and companies out of more than 100 measured in total across the survey. Four of the ten largest concern the principle of fairness and avoiding bias (highlighted in grey). The gaps around bias for surveillance and facial recognition applications, particularly relevant in the post-COVID-19 world, represent the fourth and fifth biggest in the entire data set. The data indicate that many companies do not appreciate the importance that policymakers and the public are placing on bias, and are failing to identify, assess and manage risks arising from the use of potentially discriminatory algorithms. Consequently, companies may be developing products and services that gain little traction in the market because they are poorly aligned with emerging values, preferences and regulatory guidance.

These risks require active mitigation beyond moratoria, and recent corporate actions suggest the gaps may be closing. Companies should consider going further by exposing both their models and governance frameworks to broader scrutiny, such as external independent auditors or even the general public. A close examination of model training data is also necessary, as bias can seep in inadvertently. For example, while excluding observations describing race or ethnicity might appear to eliminate the risk of bias, if the real world is segregated by post code, then one’s address can reveal sensitive characteristics, and generate biased outcomes despite good intentions.

Businesses must consider the unintended consequences for society, not just technical accuracy. Pandemics historically are associated with racism, xenophobia and class conflict. Haphazard deployment of discriminatory algorithms may worsen a tragic situation.

Key considerations for the development of unbiased AI

• How do you design an AI system that is unbiased if our societies are systemically and institutionally biased?
• What are the implications for our society if companies deploy biased algorithms into a world already rife with discrimination?
• How can we ensure emerging technologies are not deployed unfairly against vulnerable groups?
• What steps should companies take to minimize the risk of algorithmic bias?
• What rights should a victim of algorithmic discrimination have to redress unfairness?

Source: EY
