IELTS Reading: Tác Động Xã Hội của AI trong Bảo Mật – Đề Thi Mẫu Có Đáp Án

Trí tuệ nhân tạo (AI) đang định hình lại cách chúng ta nhìn nhận về quyền riêng tư và an ninh thông tin trong xã hội hiện đại. Chủ đề “What Are The Social Implications Of AI In Privacy And Security?” không chỉ phổ biến trong các đề thi IELTS Reading gần đây mà còn phản ánh mối quan tâm toàn cầu về sự phát triển công nghệ. Trong các bộ đề Cambridge IELTS và tài liệu luyện thi của British Council, các passage về công nghệ AI và tác động xã hội của nó xuất hiện với tần suất ngày càng tăng, đặc biệt trong các bộ đề từ năm 2020 trở lại đây.

Bài viết này cung cấp một đề thi IELTS Reading hoàn chỉnh gồm 3 passages với độ khó tăng dần từ Easy đến Hard, bao gồm 40 câu hỏi đa dạng như trong đề thi thật. Bạn sẽ nhận được đáp án chi tiết kèm giải thích cụ thể, từ vựng quan trọng được phân tích theo từng passage, cùng các kỹ thuật làm bài hiệu quả. Đề thi này phù hợp cho học viên từ band 5.0 trở lên, giúp bạn làm quen với format chuẩn IELTS, rèn luyện kỹ năng đọc hiểu và quản lý thời gian tối ưu. Hãy dành trọn 60 phút để hoàn thành bài test này trong điều kiện như thi thật!

Hướng Dẫn Làm Bài IELTS Reading

Tổng Quan Về IELTS Reading Test

IELTS Reading Test kéo dài 60 phút với 3 passages và tổng cộng 40 câu hỏi. Mỗi câu trả lời đúng được tính 1 điểm, không có điểm âm cho câu trả lời sai. Độ khó của các passages tăng dần, với Passage 1 thường dễ nhất và Passage 3 khó nhất. Điều quan trọng là bạn cần phân bổ thời gian hợp lý để đảm bảo hoàn thành toàn bộ bài thi.

Phân bổ thời gian khuyến nghị:

  • Passage 1: 15-17 phút (13 câu hỏi)
  • Passage 2: 18-20 phút (13 câu hỏi)
  • Passage 3: 23-25 phút (14 câu hỏi)

Lưu ý rằng không có thời gian bổ sung để chép đáp án vào Answer Sheet, vì vậy bạn nên ghi đáp án trực tiếp trong quá trình làm bài.

Các Dạng Câu Hỏi Trong Đề Này

Đề thi mẫu này bao gồm 8 dạng câu hỏi phổ biến nhất trong IELTS Reading:

  1. Multiple Choice – Chọn đáp án đúng từ các lựa chọn A, B, C, D
  2. True/False/Not Given – Xác định thông tin trong bài là đúng, sai hay không được đề cập
  3. Yes/No/Not Given – Xác định ý kiến của tác giả là đồng ý, phản đối hay không được đề cập
  4. Sentence Completion – Hoàn thành câu với từ trong bài
  5. Matching Headings – Chọn tiêu đề phù hợp cho mỗi đoạn
  6. Summary Completion – Điền từ vào đoạn tóm tắt
  7. Matching Features – Nối mô tả với khái niệm tương ứng trong bài
  8. Short-answer Questions – Trả lời câu hỏi ngắn với giới hạn số từ

IELTS Reading Practice Test

PASSAGE 1 – The Rise of AI Surveillance in Modern Cities

Độ khó: Easy (Band 5.0-6.5)

Thời gian đề xuất: 15-17 phút

In cities around the world, artificial intelligence is transforming how authorities monitor public spaces and manage security. From facial recognition cameras in shopping centres to smart traffic systems that track vehicle movements, AI-powered surveillance has become ubiquitous in urban environments. This technological revolution promises enhanced safety and efficiency, but it also raises significant questions about personal privacy and civil liberties.

The integration of AI into surveillance systems began in earnest during the early 2010s, when computing power became sufficient to process vast amounts of video data in real-time. Traditional CCTV cameras, which simply recorded footage for later review, have been replaced by intelligent systems that can identify individuals, detect unusual behaviour, and even predict potential security threats. In London alone, there are an estimated 600,000 surveillance cameras, many of which now incorporate some form of AI technology. Law enforcement agencies argue that these systems help solve crimes faster and deter criminal activity before it occurs.

However, the expansion of AI surveillance has not been without controversy. Privacy advocates warn that constant monitoring creates a “chilling effect” on public behaviour, where citizens feel unable to express themselves freely knowing they are always being watched. In 2019, San Francisco became the first major American city to ban facial recognition technology for use by police and other government agencies. The city council cited concerns about algorithmic bias – studies have shown that facial recognition systems are significantly less accurate when identifying people with darker skin tones, potentially leading to wrongful arrests and discrimination.

The commercial sector has also embraced AI surveillance, though often with different motivations. Retail stores use the technology to track customer movements, analyse shopping patterns, and prevent theft. Some companies have even begun using AI to monitor employee productivity, tracking everything from how long workers spend on breaks to their facial expressions during meetings. While businesses argue this data helps optimise operations and improve customer service, critics point out that workers have little choice but to consent to such monitoring if they want to keep their jobs.

In China, the government has implemented what many observers call the world’s most extensive surveillance network. The “Sharp Eyes” project aims to connect public and private security cameras into a single nationwide network, monitored by AI systems capable of tracking individuals across multiple locations. Citizens are assigned “social credit scores” based partly on their behaviour as captured by these cameras. A low score can result in restrictions on travel, employment, and access to services. Chinese authorities maintain that the system promotes social harmony and public safety, but human rights organisations have condemned it as authoritarian overreach.

The technical capabilities of modern AI surveillance continue to advance rapidly. Gait recognition technology can identify individuals by their walking patterns, even when their faces are not visible. Emotion detection systems claim to read people’s feelings from their facial expressions, though the scientific validity of such technology remains disputed. Some airports now use AI to screen passengers for suspicious behaviour, analysing factors like body language and eye movements. These developments suggest that future surveillance systems may be able to monitor not just our actions, but our thoughts and feelings as well.

Legal frameworks have struggled to keep pace with these technological changes. The European Union’s General Data Protection Regulation (GDPR), implemented in 2018, includes some of the world’s strongest protections for personal data, but it was written before many current AI surveillance capabilities existed. In the United States, surveillance laws vary dramatically by state, with some jurisdictions having virtually no restrictions on how authorities or businesses can use facial recognition technology. This regulatory patchwork creates confusion and makes it difficult for citizens to understand their rights.

Public opinion on AI surveillance remains divided. Surveys consistently show that people are more accepting of such technology when it is used for specific security purposes, such as preventing terrorism, but become uncomfortable when the same systems are used for broader monitoring or commercial purposes. Younger generations, who have grown up with social media and smartphones, often express less concern about privacy than older adults. This generational divide may influence how societies balance security and privacy in the coming decades.

Questions 1-13

Questions 1-5: Multiple Choice

Choose the correct letter, A, B, C or D.

  1. According to the passage, traditional CCTV cameras differed from modern AI surveillance systems because they:
    A. were more expensive to install and maintain
    B. only recorded footage without real-time analysis
    C. could not be used in public spaces
    D. were less reliable in detecting crimes

  2. What reason does the passage give for San Francisco’s ban on facial recognition?
    A. The technology was too expensive for the city
    B. Citizens protested against all forms of surveillance
    C. The systems showed bias in identifying certain groups
    D. Law enforcement refused to use the technology

  3. According to the text, how do retail businesses use AI surveillance?
    A. To replace human security guards entirely
    B. To monitor both customers and employees
    C. To share data with law enforcement
    D. To increase prices based on customer behaviour

  4. China’s “Sharp Eyes” project is described as:
    A. a model for other countries to follow
    B. limited to monitoring public spaces only
    C. connecting cameras into a national network
    D. focused primarily on preventing terrorism

  5. The passage suggests that current legal frameworks:
    A. adequately address all AI surveillance concerns
    B. have not kept up with technological developments
    C. are too restrictive on law enforcement
    D. are consistent across different countries

Questions 6-9: True/False/Not Given

Do the following statements agree with the information given in the passage?

Write:

  • TRUE if the statement agrees with the information
  • FALSE if the statement contradicts the information
  • NOT GIVEN if there is no information on this

  6. London has more surveillance cameras than any other European city.

  7. Facial recognition technology is more accurate for people with lighter skin tones.

  8. All employees support the use of AI monitoring in workplaces.

  9. Gait recognition can identify people even when their faces cannot be seen.

Questions 10-13: Sentence Completion

Complete the sentences below.

Choose NO MORE THAN TWO WORDS from the passage for each answer.

  10. Privacy advocates believe that constant surveillance creates a __ on how people behave in public.

  11. In China, citizens receive __ that can affect their access to various services.

  12. The scientific accuracy of __ systems that claim to identify feelings is questionable.

  13. Different laws in various US states create a __ that confuses citizens about their privacy rights.

Camera giám sát AI và công nghệ nhận diện khuôn mặt trong môi trường đô thị hiện đại


PASSAGE 2 – Data Privacy in the Age of Intelligent Algorithms

Độ khó: Medium (Band 6.0-7.5)

Thời gian đề xuất: 18-20 phút

The exponential growth of artificial intelligence has fundamentally altered the relationship between individuals and their personal information. Every interaction with digital technology – from searching the internet to using smartphone applications – generates data that AI systems can collect, analyse, and exploit. This unprecedented level of data harvesting has created what scholars term “surveillance capitalism,” a new economic order where human experience is transformed into behavioural data for commercial purposes. The social implications of this transformation extend far beyond simple privacy concerns, touching on fundamental questions about autonomy, power, and democracy.

At the heart of the data privacy crisis lies the concept of informed consent. When users accept lengthy terms and conditions agreements to use online services, they typically grant companies sweeping permissions to collect their data. However, research consistently demonstrates that virtually no one reads these agreements, which are deliberately written in complex legal language and can exceed 20,000 words in length. Moreover, even if users did read them, they would face a Hobson’s choice – accept the terms or forgo access to services that have become essential for modern life. Critics argue that this cannot constitute genuine consent, as it lacks both comprehension and voluntariness.

The opacity of AI algorithms compounds these privacy concerns. Companies employ sophisticated machine learning models that can infer remarkably detailed information about individuals from seemingly innocuous data. An insurance company’s AI might deduce health conditions from social media posts and web browsing history. A bank’s algorithm could assess creditworthiness based on the phone numbers in someone’s contact list. Employers have used AI to screen job applicants, with systems making decisions based on factors that correlate with protected characteristics like race or gender, even when those characteristics are not explicitly programmed into the algorithm. The inscrutability of these “black box” systems makes it virtually impossible for individuals to understand how their data is being used or to challenge decisions made about them.

Data breaches represent another critical dimension of AI-related privacy risks. As organisations accumulate vast repositories of personal information to train their AI systems, these databases become increasingly attractive targets for cybercriminals. The 2017 Equifax breach exposed sensitive data of 147 million Americans, while the 2019 Capital One incident affected over 100 million customers. Once stolen, this data can be used for identity theft, financial fraud, or sold on dark web marketplaces. AI systems themselves can become tools for attackers – adversarial machine learning techniques allow hackers to manipulate AI systems, potentially causing them to make incorrect decisions or reveal confidential information.

The power asymmetry between data collectors and data subjects has profound social consequences. Governments and corporations possess comprehensive profiles of billions of individuals, while those individuals typically have no reciprocal knowledge of how their data is used. This imbalance enables microtargeting – the practice of using detailed personal information to deliver customised messages to individuals or small groups. During elections, political campaigns have employed AI to identify persuadable voters and deliver tailored advertisements designed to influence their opinions. The 2016 Cambridge Analytica scandal, where data from 87 million Facebook users was harvested without consent for political advertising, demonstrated how this capability could be abused to undermine democratic processes.

Vulnerable populations face disproportionate risks from AI privacy violations. Children, who are often prolific users of digital technology but lack the cognitive maturity to understand privacy implications, are particularly susceptible to data exploitation. Low-income individuals may rely more heavily on “free” online services that are subsidised by data collection, effectively creating a system where privacy becomes a luxury good. Marginalised communities, already subject to discriminatory practices, find that AI systems can perpetuate and amplify existing biases by learning from historical data that reflects societal prejudices.

Some technologists propose “privacy-preserving AI” as a potential solution. Techniques like differential privacy add mathematical noise to datasets, allowing AI systems to learn useful patterns while protecting individual privacy. Federated learning enables AI models to be trained across multiple devices without centralising data. Homomorphic encryption permits computation on encrypted data, meaning sensitive information never needs to be decrypted. However, these approaches often reduce the accuracy or capabilities of AI systems, creating trade-offs between privacy and functionality that society must collectively negotiate.

Regulatory responses to AI privacy concerns have varied globally. The European Union has positioned itself as a leader in data protection, with the GDPR establishing principles like “data minimisation” and “purpose limitation” that restrict how organisations can collect and use personal information. The regulation also enshrines a “right to explanation” for algorithmic decisions, though implementing this right has proven challenging with complex AI systems. China has introduced its own comprehensive data privacy law, though critics note that it includes exemptions for government surveillance activities. In the United States, privacy regulation remains fragmented, with different rules applying to sectors like healthcare and finance, but no comprehensive federal framework comparable to the GDPR.

Looking forward, the integration of AI into emerging technologies promises to further complicate the privacy landscape. The Internet of Things will embed sensors throughout homes, offices, and cities, generating continuous streams of behavioural data. Wearable devices will monitor biological functions in real-time. Autonomous vehicles will track every journey. Each of these technologies relies on AI to function, and each creates new vectors for privacy intrusion. Society faces an urgent challenge to develop governance frameworks that can harness the benefits of AI while safeguarding fundamental privacy rights – a task that will require ongoing dialogue between technologists, policymakers, ethicists, and the public.

Questions 14-26

Questions 14-18: Yes/No/Not Given

Do the following statements agree with the views of the writer in the passage?

Write:

  • YES if the statement agrees with the views of the writer
  • NO if the statement contradicts the views of the writer
  • NOT GIVEN if it is impossible to say what the writer thinks about this

  14. Most people thoroughly read terms and conditions before accepting them.

  15. AI algorithms can reveal private information even when it is not directly provided.

  16. Data breaches are becoming less frequent as security improves.

  17. The Cambridge Analytica scandal showed how data misuse could affect elections.

  18. Privacy-preserving AI techniques work perfectly without any disadvantages.

Questions 19-22: Matching Headings

Choose the correct heading for paragraphs C, D, F, and G from the list of headings below.

Write the correct number (i-viii) next to Questions 19-22.

List of Headings:
i. The challenge of understanding AI decisions
ii. Global differences in privacy legislation
iii. Future technologies and privacy concerns
iv. The problem with consent mechanisms
v. Security risks from data collection
vi. Unequal impacts on different social groups
vii. Technical solutions for privacy protection
viii. The role of social media companies

  19. Paragraph C __
  20. Paragraph D __
  21. Paragraph F __
  22. Paragraph G __

Questions 23-26: Summary Completion

Complete the summary below.

Choose NO MORE THAN TWO WORDS from the passage for each answer.

AI systems can use data in ways that particularly harm certain groups. Children are at risk because they use technology frequently but lack 23. __ to understand privacy issues. People with lower incomes may depend on services that are 24. __ through data collection, making privacy less accessible. For communities that already face discrimination, AI can 25. __ existing prejudices when trained on biased historical data. This situation is worsened by the 26. __ between those who collect data and those whose data is collected.


PASSAGE 3 – The Epistemology of AI Security: Algorithmic Governance and Social Trust

Độ khó: Hard (Band 7.0-9.0)

Thời gian đề xuất: 23-25 phút

The ascendancy of artificial intelligence in security infrastructure represents more than a mere technological transition; it constitutes a fundamental epistemological shift in how societies conceptualise safety, risk, and trustworthiness. Traditional security paradigms relied on human discretion and contextual judgement, with rules serving as guidelines rather than immutable directives. Contemporary AI-driven security systems, by contrast, instantiate decision-making processes through algorithmic logic that is simultaneously deterministic and inscrutable. This transformation has profound implications for social cohesion, political legitimacy, and the very nature of security as a public good, raising questions that transcend conventional privacy discourse to engage with deeper philosophical concerns about knowledge, power, and social control.

The ontological nature of algorithmic security differs fundamentally from human-mediated approaches. AI systems operate through probabilistic inference, assessing security threats by identifying statistical patterns in historical data and extrapolating these patterns to future scenarios. This methodology necessarily reifies the social conditions encoded in training data, perpetuating the structural inequalities they reflect. When predictive policing algorithms allocate police resources based on historical arrest data, they create self-fulfilling prophecies: increased surveillance in certain neighbourhoods generates more arrests, which reinforces the algorithm’s assessment of those areas as high-risk, justifying continued over-policing. This recursive relationship between prediction and reality subverts the ostensible neutrality of algorithmic decision-making, revealing how technical systems embody and amplify societal prejudices.

The delegitimisation of human expertise in favour of algorithmic authority represents a significant sociological phenomenon. As AI systems demonstrate superhuman performance in certain narrow domains – from detecting financial fraud to identifying potential security threats in airport screening – organisations increasingly defer to machine recommendations, even when these conflict with human intuition. This epistemic deference to algorithms creates what scholars term “automation bias,” where human operators uncritically accept system outputs, potentially overlooking errors or exceptional circumstances that fall outside the system’s training data. The 2020 case of Robert Williams, wrongfully arrested due to a faulty facial recognition match that police officers failed to adequately scrutinise, exemplifies this phenomenon. Such incidents illustrate how the veneer of technological objectivity can supersede critical thinking, with deleterious consequences for individuals caught in algorithmic misidentifications.

Hệ thống thuật toán AI phân tích và bảo mật dữ liệu cá nhân với các lớp mã hóa phức tạp

The procedural opacity inherent in advanced AI systems poses intractable challenges for accountability and due process. Deep learning architectures, particularly deep neural networks with billions of parameters, function as “black boxes” whose decision-making logic resists human comprehension. Even their designers cannot fully explicate why a particular input yields a specific output. This opacity becomes constitutionally problematic when AI systems inform decisions that affect fundamental rights – from criminal sentencing recommendations to asylum application assessments. Legal systems traditionally require that individuals have the right to understand and challenge adverse decisions, but this principle becomes meaningless when the decision-maker is an impenetrable algorithmic system. The emergence of “explainable AI” (XAI) as a research domain attempts to address this problem, but current XAI techniques often provide post-hoc rationalisations rather than genuine insights into algorithmic reasoning, raising questions about whether true transparency is achievable or even desirable from a security perspective.

The concentration of AI capabilities within a small number of corporate entities creates novel power asymmetries with far-reaching social implications. Companies like Google, Amazon, and Microsoft possess computational infrastructure and proprietary datasets that dwarf those available to academic researchers or government agencies. This technological oligopoly means that decisions about AI security systems – including their design principles, training data, and deployment contexts – are made by unelected corporate executives pursuing profit maximisation rather than public welfare. The privatisation of security intelligence through AI systems transfers essential governmental functions to commercial actors with minimal democratic oversight. Moreover, these companies often operate transnationally, making them difficult for individual nation-states to regulate effectively. The result is a governance vacuum where critical security infrastructure exists largely beyond democratic accountability.

Adversarial attacks on AI security systems reveal fundamental vulnerabilities that undermine confidence in algorithmic governance. Researchers have demonstrated that subtle, often imperceptible modifications to inputs can cause AI systems to make catastrophically incorrect decisions – autonomous vehicles can be tricked into misreading stop signs, facial recognition systems can be fooled by specially designed glasses, and spam filters can be circumvented through strategic text alterations. These vulnerabilities exist because AI systems lack genuine understanding; they identify superficial correlations in data rather than grasping underlying causal relationships. In security contexts, adversarial attacks could allow malicious actors to evade detection or, worse, to weaponise AI systems against their operators. The arms race between AI security systems and adversarial attack techniques introduces a new dimension of instability into security infrastructure that has no clear resolution.

The psychological and sociological effects of ubiquitous AI surveillance extend beyond privacy infringement to affect fundamental aspects of social interaction and identity formation. Sociologist Shoshana Zuboff argues that constant behavioural monitoring creates unprecedented capabilities for social control, enabling what she terms “instrumentarian power” – the ability to shape human behaviour at scale through predictive analysis and targeted intervention. Unlike traditional authoritarian systems that enforce compliance through explicit coercion, instrumentarian power operates through subtle manipulation of choice architectures, making it simultaneously more palatable and more insidious. Individuals may internalise surveillance, modifying their behaviour not because of overt threats but due to ambient awareness of constant observation – a phenomenon reminiscent of Foucault’s panopticon, but implemented through distributed algorithmic systems rather than architectural design.

Resistance to invasive AI security systems confronts significant collective action problems. While many individuals express concern about privacy and surveillance, few take concrete steps to protect themselves, partly because the benefits of digital services seem immediate while privacy harms appear abstract and diffuse. This creates a “privacy paradox” where stated preferences diverge from actual behaviour. Moreover, privacy operates partly as a public good – its value depends on widespread uptake of protective measures, but each individual faces incentives to free-ride on others’ privacy efforts while continuing to enjoy convenient services. Breaking this dynamic requires coordinated intervention, either through regulation that mandates privacy protections or through cultural shifts that stigmatise excessive data collection. The European Union’s GDPR represents an ambitious attempt at the former, though its effectiveness remains subject to ongoing debate, particularly regarding enforcement against well-resourced multinational corporations.

Normative frameworks for evaluating AI security systems must grapple with incommensurable values and irreducible uncertainties. Security and privacy exist in inherent tension – measures that enhance one often diminish the other – yet both are essential to human flourishing. Utilitarian approaches that attempt to aggregate costs and benefits across populations face methodological challenges in quantifying intangible goods like dignity and autonomy, and risk sacrificing individual rights for collective welfare. Deontological frameworks that prioritise absolute rights struggle with cases where rights conflict or where absolute prohibitions on surveillance might enable preventable harms. Virtue ethics offers some promise by focusing on the character and intentions of AI system designers and deployers, but provides limited practical guidance for specific policy decisions. Perhaps most promisingly, emerging “values in design” approaches attempt to embed ethical considerations directly into AI development processes, though this requires ongoing interdisciplinary collaboration between technologists, ethicists, social scientists, and affected communities – a challenging prospect given divergent professional norms and incentive structures.

Questions 27-40

Questions 27-31: Multiple Choice

Choose the correct letter, A, B, C or D.

  27. According to the passage, how do AI security systems differ fundamentally from traditional security approaches?
    A. They are more expensive to implement
    B. They rely on algorithmic logic rather than human judgement
    C. They are only used by government agencies
    D. They cannot identify security threats effectively

  28. The term “self-fulfilling prophecies” in paragraph B refers to:
    A. accurate predictions made by AI systems
    B. algorithms that improve over time through learning
    C. predictions that become true because they influence behaviour
    D. prophecies made by security professionals

  29. What problem does the author identify with “explainable AI” (XAI)?
    A. It is too expensive to implement widely
    B. It provides rationalisations rather than true explanations
    C. It reveals too much about security vulnerabilities
    D. It requires extensive training to understand

  30. According to the passage, the concentration of AI capabilities in few companies creates:
    A. better security for all users
    B. more competition in the technology sector
    C. power imbalances with limited democratic oversight
    D. opportunities for government regulation

  31. The “privacy paradox” described in the passage refers to:
    A. privacy being both important and irrelevant
    B. the difference between what people say and what they do about privacy
    C. the impossibility of achieving privacy in modern society
    D. the conflict between different definitions of privacy

Questions 32-36: Matching Features

Match each description (32-36) with the correct concept (A-H) from the box below.

Write the correct letter, A-H, next to Questions 32-36.

Concepts:

  • A. Automation bias
  • B. Adversarial attacks
  • C. Instrumentarian power
  • D. Deep neural networks
  • E. Privacy-preserving AI
  • F. Algorithmic governance
  • G. Epistemic deference
  • H. Surveillance capitalism

  32. The ability to control behaviour through prediction and targeted intervention __

  33. Techniques to trick AI systems into making wrong decisions __

  34. The tendency to accept machine outputs without critical examination __

  35. AI systems with billions of parameters that resist human understanding __

  36. An economic system where human experience becomes behavioural data __

Questions 37-40: Short-answer Questions

Answer the questions below.

Choose NO MORE THAN THREE WORDS from the passage for each answer.

  37. What type of power operates through subtle changes to choice architectures rather than explicit force?

  38. What philosophical concept by Foucault is compared to modern distributed algorithmic surveillance?

  39. According to the passage, what does the GDPR represent in terms of privacy protection efforts?

  40. What type of approach attempts to incorporate ethical considerations directly into AI development?

Cân bằng giữa bảo mật AI và quyền riêng tư cá nhân trong xã hội số

Answer Keys – Đáp Án

PASSAGE 1: Questions 1-13

  1. B
  2. C
  3. B
  4. C
  5. B
  6. NOT GIVEN
  7. TRUE
  8. NOT GIVEN
  9. TRUE
  10. chilling effect
  11. social credit scores
  12. emotion detection
  13. regulatory patchwork

PASSAGE 2: Questions 14-26

  14. NO
  15. YES
  16. NOT GIVEN
  17. YES
  18. NO
  19. i
  20. v
  21. vi
  22. vii
  23. cognitive maturity
  24. subsidised
  25. perpetuate / amplify
  26. power asymmetry

PASSAGE 3: Questions 27-40

  27. B
  28. C
  29. B
  30. C
  31. B
  32. C
  33. B
  34. A
  35. D
  36. H
  37. instrumentarian power
  38. panopticon / Foucault’s panopticon
  39. ambitious attempt
  40. values in design

Giải Thích Đáp Án Chi Tiết

Passage 1 – Giải Thích

Câu 1: B

  • Dạng câu hỏi: Multiple Choice
  • Từ khóa: traditional CCTV cameras, modern AI surveillance systems, differed
  • Vị trí trong bài: Đoạn 2, dòng 2-4
  • Giải thích: Bài đọc nêu rõ “Traditional CCTV cameras, which simply recorded footage for later review, have been replaced by intelligent systems that can identify individuals, detect unusual behaviour, and even predict potential security threats.” Điều này cho thấy sự khác biệt chính là camera truyền thống chỉ ghi hình để xem lại sau, trong khi hệ thống AI hiện đại có thể phân tích theo thời gian thực.

Câu 2: C

  • Dạng câu hỏi: Multiple Choice
  • Từ khóa: San Francisco, ban, facial recognition, reason
  • Vị trí trong bài: Đoạn 3, dòng 3-6
  • Giải thích: Bài văn giải thích “The city council cited concerns about algorithmic bias – studies have shown that facial recognition systems are significantly less accurate when identifying people with darker skin tones.” Đây chính xác là lý do liên quan đến sự thiên vị trong việc nhận diện các nhóm người khác nhau.

Câu 6: NOT GIVEN

  • Dạng câu hỏi: True/False/Not Given
  • Từ khóa: London, more surveillance cameras, European city
  • Vị trí trong bài: Đoạn 2, dòng 5-6
  • Giải thích: Bài viết chỉ nói “In London alone, there are an estimated 600,000 surveillance cameras” nhưng không so sánh với các thành phố châu Âu khác, vì vậy không thể xác định được thông tin này.

Câu 7: TRUE

  • Dạng câu hỏi: True/False/Not Given
  • Từ khóa: facial recognition, more accurate, lighter skin tones
  • Vị trí trong bài: Đoạn 3, dòng 4-5
  • Giải thích: Bài đọc nói rằng các hệ thống nhận diện khuôn mặt “significantly less accurate when identifying people with darker skin tones,” điều này ngụ ý rằng chúng chính xác hơn với những người có màu da sáng hơn.

Câu 10: chilling effect

  • Dạng câu hỏi: Sentence Completion
  • Từ khóa: Privacy advocates, constant monitoring, creates
  • Vị trí trong bài: Đoạn 3, dòng 1-2
  • Giải thích: Cụm từ chính xác trong bài là “Privacy advocates warn that constant monitoring creates a ‘chilling effect’ on public behaviour.”

Passage 2 – Giải Thích

Câu 14: NO

  • Dạng câu hỏi: Yes/No/Not Given
  • Từ khóa: people, thoroughly read, terms and conditions
  • Vị trí trong bài: Đoạn B, dòng 3-4
  • Giải thích: Bài viết khẳng định “research consistently demonstrates that virtually no one reads these agreements,” điều này trái ngược hoàn toàn với câu phát biểu.

Câu 15: YES

  • Dạng câu hỏi: Yes/No/Not Given
  • Từ khóa: AI algorithms, reveal private information, not directly provided
  • Vị trí trong bài: Đoạn C, dòng 2-4
  • Giải thích: Tác giả cho ví dụ “An insurance company’s AI might deduce health conditions from social media posts and web browsing history,” cho thấy AI có thể suy luận thông tin riêng tư từ dữ liệu gián tiếp.

Câu 19: i (The challenge of understanding AI decisions)

  • Dạng câu hỏi: Matching Headings
  • Vị trí trong bài: Đoạn C
  • Giải thích: Đoạn này tập trung vào “opacity of AI algorithms” và việc các hệ thống “black box” khiến người dùng không thể hiểu được cách dữ liệu của họ được sử dụng.

Câu 20: v (Security risks from data collection)

  • Dạng câu hỏi: Matching Headings
  • Vị trí trong bài: Đoạn D
  • Giải thích: Đoạn văn thảo luận về “data breaches” và cách các database lớn trở thành mục tiêu của tội phạm mạng.

Câu 23: cognitive maturity

  • Dạng câu hỏi: Summary Completion
  • Từ khóa: Children, lack, understand privacy
  • Vị trí trong bài: Đoạn F, dòng 2-3
  • Giải thích: Bài viết nói trẻ em “lack the cognitive maturity to understand privacy implications.”

Passage 3 – Giải Thích

Câu 27: B

  • Dạng câu hỏi: Multiple Choice
  • Từ khóa: AI security systems, differ fundamentally, traditional security
  • Vị trí trong bài: Đoạn A, dòng 2-5
  • Giải thích: Bài đọc giải thích rõ ràng “Traditional security paradigms relied on human discretion and contextual judgement… Contemporary AI-driven security systems, by contrast, instantiate decision-making processes through algorithmic logic.”

Câu 28: C

  • Dạng câu hỏi: Multiple Choice
  • Từ khóa: self-fulfilling prophecies, paragraph B
  • Vị trí trong bài: Đoạn B, dòng 4-8
  • Giải thích: Bài viết mô tả cách dự đoán của thuật toán tạo ra nhiều giám sát hơn, dẫn đến nhiều vụ bắt giữ hơn, từ đó củng cố đánh giá ban đầu của thuật toán – một chu trình tự thực hiện.

Câu 32: C (Instrumentarian power)

  • Dạng câu hỏi: Matching Features
  • Từ khóa: control behaviour, prediction, targeted intervention
  • Vị trí trong bài: Đoạn G, dòng 2-4
  • Giải thích: Định nghĩa chính xác trong bài: “instrumentarian power – the ability to shape human behaviour at scale through predictive analysis and targeted intervention.”

Câu 37: instrumentarian power

  • Dạng câu hỏi: Short-answer Questions
  • Từ khóa: power, choice architectures, not explicit force
  • Vị trí trong bài: Đoạn G, dòng 5-7
  • Giải thích: Bài viết nói “instrumentarian power operates through subtle manipulation of choice architectures” thay vì sử dụng “explicit coercion.”

Câu 38: panopticon / Foucault’s panopticon

  • Dạng câu hỏi: Short-answer Questions
  • Từ khóa: Foucault, philosophical concept, distributed algorithmic surveillance
  • Vị trí trong bài: Đoạn G, dòng cuối
  • Giải thích: Bài đọc so sánh hiện tượng hiện đại với “Foucault’s panopticon, but implemented through distributed algorithmic systems.”

Từ Vựng Quan Trọng Theo Passage

Passage 1 – Essential Vocabulary

| Từ vựng | Loại từ | Phiên âm | Nghĩa tiếng Việt | Ví dụ từ bài | Collocation |
| --- | --- | --- | --- | --- | --- |
| ubiquitous | adj | /juːˈbɪkwɪtəs/ | có mặt khắp nơi, phổ biến | AI-powered surveillance has become ubiquitous in urban environments | ubiquitous technology, ubiquitous presence |
| facial recognition | noun phrase | /ˈfeɪʃl ˌrekəɡˈnɪʃn/ | nhận diện khuôn mặt | facial recognition cameras in shopping centres | facial recognition system, facial recognition technology |
| law enforcement | noun phrase | /lɔː ɪnˈfɔːsmənt/ | cơ quan thực thi pháp luật | Law enforcement agencies argue that these systems help | law enforcement agencies, law enforcement officials |
| controversy | n | /ˈkɒntrəvɜːsi/ | tranh cãi, tranh luận | the expansion of AI surveillance has not been without controversy | cause controversy, public controversy |
| chilling effect | noun phrase | /ˈtʃɪlɪŋ ɪˈfekt/ | tác động làm nản lòng, răn đe | constant monitoring creates a “chilling effect” on public behaviour | have a chilling effect, create a chilling effect |
| algorithmic bias | noun phrase | /ˌælɡəˈrɪðmɪk ˈbaɪəs/ | thiên kiến thuật toán | concerns about algorithmic bias | address algorithmic bias, algorithmic bias problem |
| wrongful arrest | noun phrase | /ˈrɒŋfl əˈrest/ | bắt giữ oan sai | potentially leading to wrongful arrests | wrongful arrest claim, wrongful arrest lawsuit |
| consent | n/v | /kənˈsent/ | sự đồng ý, chấp thuận | workers have little choice but to consent | give consent, informed consent |
| extensive | adj | /ɪkˈstensɪv/ | rộng lớn, bao quát | the world’s most extensive surveillance network | extensive network, extensive research |
| authoritarian | adj | /ɔːˌθɒrɪˈteəriən/ | chuyên quyền, độc đoán | condemned it as authoritarian overreach | authoritarian regime, authoritarian government |
| gait recognition | noun phrase | /ɡeɪt ˌrekəɡˈnɪʃn/ | nhận diện dáng đi | Gait recognition technology can identify individuals | gait recognition system, gait recognition algorithm |
| regulatory patchwork | noun phrase | /ˈreɡjələtəri ˈpætʃwɜːk/ | hệ thống quy định manh mún | This regulatory patchwork creates confusion | regulatory patchwork approach |

Passage 2 – Essential Vocabulary

| Từ vựng | Loại từ | Phiên âm | Nghĩa tiếng Việt | Ví dụ từ bài | Collocation |
| --- | --- | --- | --- | --- | --- |
| exponential growth | noun phrase | /ˌekspəˈnenʃl ɡrəʊθ/ | tăng trưởng theo cấp số nhân | The exponential growth of artificial intelligence | exponential growth rate, experience exponential growth |
| surveillance capitalism | noun phrase | /səˈveɪləns ˈkæpɪtəlɪzəm/ | chủ nghĩa tư bản giám sát | what scholars term “surveillance capitalism” | era of surveillance capitalism, surveillance capitalism model |
| informed consent | noun phrase | /ɪnˈfɔːmd kənˈsent/ | sự đồng ý sau khi được thông báo | At the heart of the data privacy crisis lies the concept of informed consent | give informed consent, obtain informed consent |
| sweeping permissions | noun phrase | /ˈswiːpɪŋ pəˈmɪʃnz/ | quyền hạn rộng rãi | grant companies sweeping permissions | sweeping permissions to collect, grant sweeping permissions |
| Hobson’s choice | noun phrase | /ˈhɒbsnz tʃɔɪs/ | sự lựa chọn không có lựa chọn | they would face a Hobson’s choice | face a Hobson’s choice |
| opacity | n | /əʊˈpæsəti/ | sự mờ đục, không rõ ràng | The opacity of AI algorithms | opacity of algorithms, algorithmic opacity |
| black box | noun phrase | /blæk bɒks/ | hộp đen (không rõ cách hoạt động) | The inscrutability of these “black box” systems | black box algorithm, black box system |
| data breach | noun phrase | /ˈdeɪtə briːtʃ/ | vi phạm dữ liệu, rò rỉ dữ liệu | Data breaches represent another critical dimension | suffer a data breach, major data breach |
| cybercriminal | n | /ˈsaɪbəˌkrɪmɪnl/ | tội phạm mạng | increasingly attractive targets for cybercriminals | cybercriminal activity, cybercriminal network |
| power asymmetry | noun phrase | /ˈpaʊər əˈsɪmətri/ | sự bất cân xứng về quyền lực | The power asymmetry between data collectors | power asymmetry exists, reduce power asymmetry |
| microtargeting | n | /ˈmaɪkrəʊˌtɑːɡɪtɪŋ/ | nhắm mục tiêu vi mô | This imbalance enables microtargeting | microtargeting techniques, political microtargeting |
| disproportionate | adj | /ˌdɪsprəˈpɔːʃənət/ | không cân xứng, quá mức | Vulnerable populations face disproportionate risks | disproportionate impact, disproportionate effect |
| perpetuate | v | /pəˈpetʃueɪt/ | làm duy trì, kéo dài | AI systems can perpetuate and amplify existing biases | perpetuate discrimination, perpetuate inequality |
| differential privacy | noun phrase | /ˌdɪfəˈrenʃl ˈprɪvəsi/ | quyền riêng tư vi phân | Techniques like differential privacy add mathematical noise | differential privacy mechanism, implement differential privacy |
| trade-off | n | /ˈtreɪd ɒf/ | sự đánh đổi | creating trade-offs between privacy and functionality | involve trade-offs, make trade-offs |

Passage 3 – Essential Vocabulary

| Từ vựng | Loại từ | Phiên âm | Nghĩa tiếng Việt | Ví dụ từ bài | Collocation |
| --- | --- | --- | --- | --- | --- |
| ascendancy | n | /əˈsendənsi/ | sự thống trị, ưu thế | The ascendancy of artificial intelligence | gain ascendancy, rise to ascendancy |
| epistemological | adj | /ɪˌpɪstəməˈlɒdʒɪkl/ | thuộc nhận thức luận | a fundamental epistemological shift | epistemological question, epistemological framework |
| immutable | adj | /ɪˈmjuːtəbl/ | không thay đổi được | rules serving as guidelines rather than immutable directives | immutable law, immutable truth |
| inscrutable | adj | /ɪnˈskruːtəbl/ | khó hiểu, khó đoán | simultaneously deterministic and inscrutable | inscrutable logic, remain inscrutable |
| ontological | adj | /ˌɒntəˈlɒdʒɪkl/ | thuộc bản thể luận | The ontological nature of algorithmic security | ontological question, ontological status |
| probabilistic inference | noun phrase | /ˌprɒbəbɪˈlɪstɪk ˈɪnfərəns/ | suy luận xác suất | AI systems operate through probabilistic inference | probabilistic inference method, make probabilistic inference |
| reify | v | /ˈreɪɪfaɪ/ | cụ thể hóa | This methodology necessarily reifies the social conditions | reify concepts, tend to reify |
| self-fulfilling prophecy | noun phrase | /self fʊlˈfɪlɪŋ ˈprɒfəsi/ | lời tiên tri tự ứng nghiệm | they create self-fulfilling prophecies | become a self-fulfilling prophecy |
| recursive | adj | /rɪˈkɜːsɪv/ | đệ quy, lặp lại | This recursive relationship between prediction | recursive process, recursive algorithm |
| ostensible | adj | /ɒˈstensəbl/ | bề ngoài, tỏ vẻ | subverts the ostensible neutrality | ostensible purpose, ostensible reason |
| automation bias | noun phrase | /ˌɔːtəˈmeɪʃn ˈbaɪəs/ | thiên kiến tự động hóa | creates what scholars term “automation bias” | suffer from automation bias, automation bias effect |
| uncritically | adv | /ʌnˈkrɪtɪkli/ | một cách thiếu phê phán | human operators uncritically accept system outputs | accept uncritically, follow uncritically |
| veneer | n | /vəˈnɪə(r)/ | lớp phủ bên ngoài | the veneer of technological objectivity | veneer of respectability, thin veneer |
| deleterious | adj | /ˌdeləˈtɪəriəs/ | có hại | with deleterious consequences for individuals | deleterious effect, deleterious impact |
| intractable | adj | /ɪnˈtræktəbl/ | nan giải, khó giải quyết | poses intractable challenges for accountability | intractable problem, prove intractable |
| impenetrable | adj | /ɪmˈpenɪtrəbl/ | không thể xâm nhập | when the decision-maker is an impenetrable algorithmic system | impenetrable barrier, remain impenetrable |
| oligopoly | n | /ˌɒlɪˈɡɒpəli/ | thị trường độc quyền nhóm | This technological oligopoly means that decisions | oligopoly market, oligopoly power |
| adversarial attack | noun phrase | /ˌædvəˈseəriəl əˈtæk/ | tấn công đối kháng | Adversarial attacks on AI security systems | launch adversarial attack, adversarial attack technique |
| instrumentarian power | noun phrase | /ˌɪnstrəmenˈteəriən ˈpaʊə(r)/ | quyền lực công cụ hóa | enabling what she terms “instrumentarian power” | exercise instrumentarian power, instrumentarian power system |
| panopticon | n | /pæˈnɒptɪkɒn/ | nhà tù toàn cảnh (Foucault) | reminiscent of Foucault’s panopticon | digital panopticon, panopticon effect |
| incommensurable | adj | /ˌɪnkəˈmenʃərəbl/ | không thể so sánh được | must grapple with incommensurable values | incommensurable values, incommensurable concepts |

Kết Luận

Đề thi IELTS Reading mẫu về chủ đề “What are the social implications of AI in privacy and security?” mà bạn vừa hoàn thành đại diện cho một trong những topic quan trọng và thường xuyên xuất hiện trong các kỳ thi IELTS gần đây. Ba passages đã đưa bạn đi từ những khái niệm cơ bản về giám sát AI trong môi trường đô thị (Passage 1), đến những vấn đề phức tạp hơn về quyền riêng tư dữ liệu và thuật toán (Passage 2), và cuối cùng là những phân tích triết học sâu sắc về bản chất của bảo mật thuật toán và niềm tin xã hội (Passage 3).

Với tổng cộng 40 câu hỏi đa dạng bao gồm Multiple Choice, True/False/Not Given, Yes/No/Not Given, Matching Headings, Sentence Completion, Summary Completion, Matching Features và Short-answer Questions, bạn đã được luyện tập toàn diện các dạng bài trong IELTS Reading thực tế. Phần đáp án chi tiết không chỉ cung cấp câu trả lời đúng mà còn giải thích rõ ràng vị trí thông tin, cách paraphrase và lý do tại sao đáp án đó chính xác.

Bảng từ vựng theo từng passage với hơn 40 từ và cụm từ quan trọng, kèm phiên âm, nghĩa tiếng Việt, ví dụ thực tế và collocations phổ biến sẽ giúp bạn không chỉ mở rộng vốn từ vựng mà còn hiểu cách sử dụng chúng trong ngữ cảnh học thuật. Đây là những từ vựng có khả năng xuất hiện cao trong các đề thi IELTS khác về công nghệ, xã hội và bảo mật.

Để tận dụng tối đa đề thi này, hãy xem lại những câu bạn làm sai, phân tích kỹ giải thích đáp án để hiểu rõ lỗi sai của mình. Luyện tập lại các đoạn văn khó, tập trung vào việc cải thiện tốc độ đọc và kỹ năng skimming/scanning. Đừng quên ôn tập từ vựng thường xuyên và áp dụng chúng vào writing và speaking để nâng cao band điểm tổng thể. Chúc bạn thành công trong kỳ thi IELTS sắp tới!
