As AI technology develops at breakneck speed, personal privacy in the digital environment has become a hot topic worldwide. The topic "How Does AI Affect Privacy In The Digital Age?" has appeared with increasing frequency in recent IELTS Reading exams, reflecting the international community's concern about the impact of artificial intelligence on private life.
This article provides a complete IELTS Reading test of 3 passages, with difficulty rising from Easy to Hard, and 40 varied questions modeled on the real exam. You will practice the most common question types, including True/False/Not Given, Matching Headings, Summary Completion, and others. Every question comes with a detailed answer key, including an explanation of where the information is located and the paraphrasing techniques involved.
This test is suitable for learners from band 5.0 upward. It will familiarize you with the structure of the real exam, sharpen your reading comprehension, and build essential academic vocabulary on technology and privacy. Set aside a full 60 minutes and complete the test under conditions as close to the real exam room as possible.
IELTS Reading Test Guide
IELTS Reading Test Overview
The IELTS Reading Test lasts 60 minutes and consists of 3 passages with a total of 40 questions. Each correct answer earns 1 point, and there is no penalty for wrong answers. Each passage is between 650 and 1,000 words long, with difficulty increasing from passage to passage.
Recommended time allocation:
- Passage 1 (Easy): 15-17 minutes (13 questions)
- Passage 2 (Medium): 18-20 minutes (13 questions)
- Passage 3 (Hard): 23-25 minutes (14 questions)
You must transfer your answers to the Answer Sheet within these 60 minutes; unlike the Listening section, no extra transfer time is given.
Question Types in This Test
This practice test covers the 7 most common question types:
- Multiple Choice – choose the correct answer from options A, B, C, D
- True/False/Not Given – decide whether a statement is true, false, or not mentioned
- Matching Headings – match each paragraph with a suitable heading
- Summary Completion – fill in the gaps in a summary
- Matching Features – match pieces of information with the corresponding items
- Sentence Completion – complete sentences with information from the passage
- Short-answer Questions – answer questions briefly within a word limit
IELTS Reading Practice Test
PASSAGE 1 – The Digital Footprint: How AI Tracks Our Daily Lives
Difficulty: Easy (Band 5.0-6.5)
Suggested time: 15-17 minutes
Every day, billions of people around the world use smartphones, browse the internet, and interact with various digital platforms. What many don’t realize is that each click, search, and post creates a digital footprint – a trail of data that reveals intimate details about our lives. Artificial intelligence (AI) systems are increasingly being used to collect, analyze, and interpret this data, fundamentally changing how our personal information is handled in the modern world.
The process begins innocuously enough. When you search for a product online, AI algorithms note your preferences. When you take a photo with your smartphone, facial recognition technology identifies who is in the picture. When you ask a voice assistant for the weather, your speech patterns and location are recorded. These individual pieces of information might seem harmless, but when combined and analyzed by sophisticated AI systems, they create a comprehensive profile of your habits, interests, relationships, and even your emotional state.
E-commerce platforms are among the most prolific collectors of personal data. Every item you view, add to your cart, or purchase is meticulously tracked. AI systems analyze this information to predict what you might want to buy next, often with remarkable accuracy. While this can make shopping more convenient, it also means that companies know more about your preferences than you might realize. Some retailers can even predict major life events, such as pregnancy, based solely on purchasing patterns – a capability that raises significant privacy concerns.
Social media platforms take data collection even further. AI systems analyze not just what you post, but also how long you look at certain content, which posts you engage with, and even when you’re most active online. This information helps platforms create highly personalized content feeds designed to keep you scrolling. However, this level of monitoring means that these companies possess extremely detailed knowledge about your interests, opinions, and social connections. Research has shown that AI can accurately predict personality traits, political views, and even mental health states based on social media activity alone.
The healthcare sector is another area where AI is transforming privacy norms. Wearable devices like fitness trackers and smartwatches continuously monitor vital signs such as heart rate, sleep patterns, and physical activity levels. While this data can provide valuable health insights, it also creates a permanent record of your physiological state. Insurance companies and employers have shown interest in accessing this information, potentially using it to make decisions about coverage or employment. Some privacy advocates worry that this could lead to discrimination against people with certain health conditions.
Financial institutions have embraced AI for both security and marketing purposes. AI systems can detect fraudulent transactions by identifying unusual spending patterns, which helps protect consumers. However, these same systems also build detailed financial profiles that can be used for targeted advertising or to make decisions about loan approvals and credit limits. The algorithmic decision-making process is often opaque, meaning consumers may not understand why they were denied credit or offered certain terms.
Perhaps most concerning is how different datasets can be combined to create even more detailed profiles. A practice known as data aggregation involves purchasing information from multiple sources and using AI to find correlations and insights that wouldn’t be apparent from any single dataset. For example, your shopping history, location data, and social media posts might be combined to infer information about your health, relationships, or financial situation. This practice happens largely without consumer awareness or consent.
The regulatory landscape is struggling to keep pace with these technological developments. Some regions, such as the European Union, have implemented comprehensive data protection laws like the General Data Protection Regulation (GDPR), which gives individuals more control over their personal information. However, enforcement remains challenging, and many countries lack similar protections. Furthermore, the global nature of digital services means that data collected in one jurisdiction may be processed or stored in another with different privacy standards.
Consumer education is another significant challenge. Many people don’t fully understand what data is being collected, how it’s being used, or what rights they have to control it. Privacy policies are often lengthy and written in technical language that makes them difficult for the average person to comprehend. Even when individuals are aware of privacy issues, they may feel they have no choice but to accept certain data collection practices if they want to use essential digital services.
Despite these concerns, AI-driven data collection also offers genuine benefits. It enables personalized services, improves security, and can even save lives through medical applications. The key question facing society is how to balance these benefits with the fundamental right to privacy. As AI capabilities continue to advance, finding this balance becomes increasingly urgent. Stakeholders – including governments, technology companies, and citizens – must work together to ensure that the digital age doesn’t come at the cost of our personal autonomy and privacy.
Questions 1-13
Questions 1-5: Multiple Choice
Choose the correct letter, A, B, C, or D.
1. According to the passage, what is a “digital footprint”?
A) A security measure used by websites
B) A record of someone’s online activities
C) A type of artificial intelligence
D) A method of protecting personal data
2. What can AI systems predict based on shopping patterns?
A) Only the next product someone will buy
B) Someone’s age and gender
C) Major life events like pregnancy
D) Credit card fraud
3. How do social media platforms use AI according to the passage?
A) To delete inappropriate content
B) To create personalized content feeds
C) To protect user privacy
D) To limit screen time
4. What concern do privacy advocates have about wearable health devices?
A) They are not accurate enough
B) They are too expensive for most people
C) The data might be used for discrimination
D) They don’t provide useful information
5. What is “data aggregation”?
A) Combining information from multiple sources
B) Deleting old data
C) Protecting data with encryption
D) Sharing data with government agencies
Questions 6-9: True/False/Not Given
Do the following statements agree with the information given in the passage?
Write:
- TRUE if the statement agrees with the information
- FALSE if the statement contradicts the information
- NOT GIVEN if there is no information on this
6. Voice assistants record both your speech patterns and your location.
7. All countries have implemented comprehensive data protection laws similar to GDPR.
8. Financial institutions only use AI for security purposes.
9. Most people fully understand the privacy policies they agree to.
Questions 10-13: Sentence Completion
Complete the sentences below.
Choose NO MORE THAN TWO WORDS from the passage for each answer.
10. AI systems can identify people in photographs using __ technology.
11. The process by which AI systems make credit decisions is often described as __.
12. The GDPR is an example of a __ law that gives people more control over their data.
13. To balance AI benefits with privacy, governments, companies, and citizens need to act as __.
PASSAGE 2 – The Privacy Paradox: Consumer Behavior in the Age of Surveillance
Difficulty: Medium (Band 6.0-7.5)
Suggested time: 18-20 minutes
The relationship between consumers and their personal data in the digital age presents a fascinating contradiction that researchers have termed the “privacy paradox.” While surveys consistently show that people express serious concerns about how companies collect and use their personal information, these same individuals regularly engage in behaviors that compromise their privacy. Understanding this paradox is crucial for developing effective privacy policies and technologies that protect users in an era where artificial intelligence has made data exploitation both more powerful and more subtle than ever before.
A. The Psychology of Privacy Concerns
Research into consumer attitudes reveals a complex psychological landscape. Studies conducted across multiple countries demonstrate that approximately 80% of internet users express anxiety about their online privacy, with particular concern about how AI systems use their data to make automated decisions. These concerns are not unfounded – instances of data breaches, unauthorized surveillance, and manipulative algorithmic targeting regularly make headlines. However, despite these worries, the majority of users continue to share extensive personal information with minimal hesitation.
B. The Convenience-Privacy Trade-off
One primary factor driving this paradoxical behavior is the immediate gratification that comes from digital services. When faced with the choice between protecting their privacy and accessing convenient, personalized services, most consumers opt for convenience. Free email services, social networking platforms, and navigation apps have become so integral to modern life that many people consider them essential rather than optional. The value proposition is clear: in exchange for personal data, users receive services that would otherwise cost money or simply not exist. This trade-off is not always explicit, but it forms the underlying economic model of much of the internet.
C. Information Asymmetry and Cognitive Limitations
Another key element of the privacy paradox relates to the asymmetric information between service providers and users. Companies deploying AI systems have a sophisticated understanding of what data they collect, how algorithms process it, and what insights can be derived. Conversely, average users typically have limited comprehension of these processes. Privacy policies and terms of service are deliberately written in dense legal language, with some documents exceeding 10,000 words. Research shows that reading all the privacy policies one encounters in a year would require approximately 244 hours – clearly an unrealistic expectation. This information gap means users cannot make truly informed decisions about their privacy, even when they wish to do so.
D. The Illusion of Control
Technology companies have become adept at creating an illusion that users maintain control over their data. Privacy settings dashboards, data download options, and consent mechanisms give the appearance of user autonomy. However, critics argue these features are often examples of “privacy theater” – superficial gestures that don’t address fundamental power imbalances. The sheer complexity of modern data ecosystems means that even users who meticulously adjust their settings cannot fully prevent their information from being collected, shared, and analyzed. Furthermore, AI systems can infer information that users never explicitly provided, rendering some privacy controls essentially meaningless.
E. Social Pressure and Network Effects
The privacy paradox is also perpetuated by network effects and social pressure. Many digital platforms become more valuable as more people use them, creating powerful incentives for individuals to join even if they have privacy concerns. Young people, in particular, may feel socially excluded if they don’t participate in popular social media platforms. This dynamic creates a situation where individual privacy preferences are overridden by collective behavior patterns. Even privacy-conscious individuals may feel compelled to use services that collect extensive data because opting out would mean losing touch with their social circles or missing important information.
F. Temporal Discounting and Future Consequences
Behavioral economics offers another explanation for the privacy paradox through the concept of temporal discounting – the tendency to value immediate rewards more highly than future costs. When users sign up for a new service, they immediately experience benefits like entertainment, social connection, or productivity tools. The potential negative consequences of data collection – such as identity theft, discrimination, or manipulative targeting – are uncertain and may occur far in the future. Psychological research demonstrates that humans are generally poor at accurately weighing such temporally distant risks, particularly when they involve abstract or probabilistic outcomes.
G. The Role of AI in Deepening the Paradox
Artificial intelligence has exacerbated the privacy paradox in several ways. First, AI enables the extraction of far more insights from data than was previously possible, meaning that seemingly innocuous information can be used to deduce sensitive characteristics. Second, AI-powered personalization creates highly engaging user experiences that make digital services even more difficult to resist. Third, the opaque nature of many AI systems means that users cannot easily understand or predict how their data is being used. Machine learning models can identify patterns and make inferences that would be impossible for humans to anticipate, further widening the information asymmetry between companies and consumers.
H. Paths Forward: Bridging the Paradox
Addressing the privacy paradox requires multifaceted approaches. Regulatory interventions like the GDPR attempt to rebalance power by giving users more rights and requiring companies to be more transparent. Privacy-enhancing technologies such as differential privacy and federated learning promise to enable useful AI applications while protecting individual data. Some experts advocate for shifting the economic model away from surveillance capitalism entirely, perhaps through subscription-based services or publicly funded alternatives. Others focus on improving privacy literacy so consumers can make better-informed decisions.
However, each approach faces significant challenges. Regulations can be circumvented or may stifle innovation. Technical solutions may be too complex for widespread adoption. Alternative business models may not be economically viable or attractive to consumers accustomed to “free” services. Ultimately, resolving the privacy paradox may require nothing less than a fundamental reconfiguration of the relationship between individuals, technology companies, and data in the digital age – a transformation that will require cooperation among technologists, policymakers, businesses, and civil society.
Questions 14-26
Questions 14-18: Matching Headings
The passage has eight labeled paragraphs, A-H.
Choose the correct heading for paragraphs A-E from the list of headings below.
Write the correct number i-x.
List of Headings:
i. The difficulty of avoiding popular platforms
ii. Technical solutions to privacy problems
iii. Why people worry about but don’t protect their privacy
iv. How companies make users feel they have control
v. The problem of understanding complex privacy terms
vi. The benefits users receive for sharing data
vii. Surveys showing privacy concerns
viii. Future risks versus present benefits
ix. How AI makes the problem worse
x. Legal approaches to protecting privacy
14. Paragraph A
15. Paragraph B
16. Paragraph C
17. Paragraph D
18. Paragraph E
Questions 19-23: Yes/No/Not Given
Do the following statements agree with the views of the writer in the passage?
Write:
- YES if the statement agrees with the views of the writer
- NO if the statement contradicts the views of the writer
- NOT GIVEN if it is impossible to say what the writer thinks about this
19. The privacy paradox shows that consumers are irrational in their decision-making.
20. Privacy settings provided by technology companies effectively protect user data.
21. Reading all privacy policies you encounter would take more than 200 hours per year.
22. Young people are more privacy-conscious than older generations.
23. Artificial intelligence makes it easier for users to understand how their data is used.
Questions 24-26: Summary Completion
Complete the summary below.
Choose NO MORE THAN TWO WORDS from the passage for each answer.
The privacy paradox occurs partly because of (24) __, meaning companies know much more about data processes than users do. Even when users try to protect themselves, AI can (25) __ information that wasn’t explicitly shared. Resolving this issue requires a fundamental (26) __ of how individuals, companies, and data relate to each other.
PASSAGE 3 – Algorithmic Governance and the Future of Privacy Rights
Difficulty: Hard (Band 7.0-9.0)
Suggested time: 23-25 minutes
The proliferation of artificial intelligence systems in contemporary society has precipitated a fundamental shift in the nature of privacy itself. Traditional conceptions of privacy, rooted in liberal democratic theory and premised on individual control over discrete pieces of information, are increasingly inadequate for addressing the challenges posed by sophisticated AI technologies. These systems don’t merely collect data; they generate new knowledge through inferential analytics, create predictive models of behavior, and exercise algorithmic governance over various aspects of social, economic, and political life. This transformation necessitates a reconceptualization of privacy rights that accounts for the systemic and relational dimensions of data processing in the age of AI.
The theoretical framework that has dominated privacy discourse since the late 19th century emphasizes privacy as an individual right to be “let alone,” with subsequent jurisprudence expanding this to include informational self-determination – the notion that individuals should control information about themselves. However, contemporary AI systems undermine this framework in multiple ways. First, the granularity and ubiquity of data collection mean that maintaining anonymity or limiting disclosure has become practically impossible in digitally mediated environments. Second, AI’s capacity for inferential analytics means that sensitive information can be deduced even when not directly provided, rendering consent mechanisms largely ineffectual. Third, the collective nature of many AI systems means that individual privacy preferences cannot be disentangled from broader patterns of data aggregation and analysis.
Consider the epistemological implications of machine learning algorithms that identify correlations imperceptible to human cognition. When an AI system determines that people who purchase certain combinations of products are likely to have specific health conditions, or that individuals with particular social media behavior patterns are probable candidates for loan defaults, it creates knowledge that exists solely within algorithmic processes. This algorithmically generated knowledge raises profound questions about the locus of privacy protection. Should privacy rights extend to probabilistic inferences about individuals, even when those inferences are based on aggregated patterns rather than personal information? If such inferences influence consequential decisions about employment, creditworthiness, or insurance eligibility, the answer seems clearly affirmative, yet existing legal frameworks provide minimal protection against this form of inferential privacy violation.
The opacity of contemporary AI systems – what scholars term the “black box problem” – further complicates privacy protection. Many advanced machine learning models, particularly deep neural networks, operate through processes that are not readily interpretable even by their creators. When such systems make decisions affecting individuals, those individuals often cannot ascertain what information influenced the decision or how it was weighted. This epistemological opacity is not merely a technical limitation but a fundamental characteristic of certain AI architectures. The inability to interrogate algorithmic decision-making processes means that individuals cannot effectively exercise rights like data rectification or algorithmic contestation, even when such rights exist nominally in law.
Furthermore, AI systems increasingly operate through what can be termed “continuous authentication” or “ambient surveillance.” Rather than discrete instances of data collection requiring explicit consent, modern digital environments involve perpetual monitoring that generates behavioral biometrics – unique patterns in how individuals type, swipe, move, speak, and interact with devices. These behavioral signatures can identify individuals with high accuracy, effectively eliminating anonymity even in ostensibly public digital spaces. The granular temporal resolution of such monitoring means that AI systems can detect subtle changes in behavior that may reveal information about emotional states, health conditions, or intentions. This form of surveillance operates below the threshold of traditional privacy protections, which were designed for discrete, identifiable data points rather than continuous streams of ambient information.
The geopolitical dimension of AI-driven privacy concerns deserves particular attention. Different jurisdictions have adopted markedly divergent approaches to regulating AI and protecting privacy. The European Union’s regulatory framework, exemplified by the GDPR and proposed AI Act, emphasizes individual rights, algorithmic transparency, and precautionary governance. In contrast, the United States has pursued a more sectoral and market-oriented approach, with limited comprehensive federal regulation. China has developed a model that combines strict data localization requirements with extensive state access to personal data for social governance purposes. These divergent regulatory paradigms create challenges for multinational technology platforms and raise questions about jurisdictional sovereignty in the digital realm.
Emerging technologies promise to further transform the privacy landscape. Federated learning enables AI models to be trained on distributed datasets without centralizing raw data, potentially preserving privacy while enabling useful applications. Differential privacy introduces mathematical guarantees that individual records cannot be reconstructed from aggregated data analyses. Homomorphic encryption allows computations on encrypted data without decryption, enabling secure AI processing. However, each of these privacy-enhancing technologies involves trade-offs in terms of accuracy, computational cost, or usability. Moreover, their effective deployment requires technical expertise and infrastructure that may be inaccessible to smaller organizations or individuals in resource-constrained settings.
The commodification of personal data through what Shoshana Zuboff terms “surveillance capitalism” represents perhaps the most significant structural challenge to privacy in the AI age. The business models of dominant digital platforms depend fundamentally on extracting behavioral data, using AI to analyze and predict behavior, and monetizing those predictions through targeted advertising or behavioral modification. This economic structure creates powerful institutional incentives for maximizing data collection and analytical capability, with privacy protection viewed as a constraint rather than an objective. Some scholars argue that meaningful privacy protection requires not merely regulatory intervention but transformation of these underlying economic structures – perhaps through data cooperatives, algorithmic public utilities, or other alternative models that realign incentives.
Normative questions about the appropriate balance between AI capabilities and privacy protections resist easy resolution. AI systems promise substantial social benefits: improved medical diagnoses, more efficient resource allocation, enhanced security, and scientific breakthroughs. Many of these benefits depend on access to large datasets and sophisticated analytical capabilities. Overly restrictive privacy regulations might impede valuable innovations, while insufficient protections may enable corporate or state surveillance that undermines democratic governance and individual autonomy. Different communities and cultures may legitimately reach different conclusions about where to strike this balance, suggesting that uniform global standards may be neither achievable nor desirable.
Looking forward, several scholarly trajectories appear particularly salient. First, developing legal and technical frameworks for collective data governance that recognize data’s relational and contextual nature. Second, creating mechanisms for algorithmic accountability that function even with opaque AI systems, perhaps through auditing regimes or regulatory sandboxes. Third, exploring alternative economic models that don’t rely on extensive behavioral data extraction. Fourth, cultivating critical data literacy throughout society so individuals can better navigate privacy-relevant decisions. Finally, maintaining interdisciplinary dialogue among computer scientists, legal scholars, ethicists, and policymakers to ensure technical developments and regulatory frameworks evolve in tandem. The future of privacy in the age of AI will be determined not by technology alone, but by the social, legal, and economic structures we construct around it.
Questions 27-40
Questions 27-31: Multiple Choice
Choose the correct letter, A, B, C, or D.
27. According to the passage, traditional privacy frameworks are inadequate because:
A) They were designed before the internet existed
B) They focus on individual control over specific information
C) They don’t apply to artificial intelligence
D) They are only relevant in democratic countries
28. What is the main problem with “algorithmically generated knowledge”?
A) It is always inaccurate
B) It violates existing privacy laws
C) It creates inferences about people from aggregated patterns
D) It requires too much computing power
29. The “black box problem” refers to:
A) Storage devices used by AI companies
B) The inability to understand how some AI systems make decisions
C) Security measures that protect AI algorithms
D) A type of data encryption
30. “Continuous authentication” is concerning because:
A) It requires constant user input
B) It eliminates the need for passwords
C) It enables perpetual monitoring through behavioral patterns
D) It is too expensive to implement
31. According to the passage, different countries have approached AI regulation:
A) In identical ways based on international treaties
B) Only through market-based solutions
C) Through markedly different regulatory paradigms
D) By prohibiting all AI systems
Questions 32-36: Matching Features
Match each technology (Questions 32-36) with the correct characteristic (A-H).
Write the correct letter A-H.
Characteristics:
A) Introduces mathematical guarantees for privacy
B) Enables training AI without centralizing data
C) Allows computation without decryption
D) Requires expensive infrastructure
E) Eliminates all privacy concerns
F) Works only with simple algorithms
G) Has been banned in most countries
H) Provides perfect accuracy without trade-offs
32. Federated learning
33. Differential privacy
34. Homomorphic encryption
35. Privacy-enhancing technologies (general)
36. Data cooperatives
Questions 37-40: Short-answer Questions
Answer the questions below.
Choose NO MORE THAN THREE WORDS from the passage for each answer.
37. What term describes the business model that depends on extracting and monetizing behavioral data?
38. What type of information can AI systems deduce even when not directly provided by users?
39. What aspect of data do scholars suggest should be recognized in governance frameworks?
40. What kind of dialogue must be maintained to ensure technology and regulation develop together?
Answer Keys
PASSAGE 1: Questions 1-13
1. B
2. C
3. B
4. C
5. A
6. TRUE
7. FALSE
8. FALSE
9. FALSE
10. facial recognition
11. opaque
12. data protection
13. stakeholders
PASSAGE 2: Questions 14-26
14. vii
15. vi
16. v
17. iv
18. i
19. NOT GIVEN
20. NO
21. YES
22. NOT GIVEN
23. NO
24. information asymmetry / asymmetric information
25. infer
26. reconfiguration
PASSAGE 3: Questions 27-40
27. B
28. C
29. B
30. C
31. C
32. B
33. A
34. C
35. D
36. NOT GIVEN (Note: data cooperatives are not described with any of the characteristics in the list; this item serves as a distractor)
37. surveillance capitalism
38. sensitive information
39. relational (nature) / contextual (nature)
40. interdisciplinary dialogue
Detailed Answer Explanations
Passage 1 – Explanations
Question 1: B
- Question type: Multiple Choice
- Keywords: digital footprint
- Location: Paragraph 1, sentences 2-3
- Explanation: The passage explicitly defines a "digital footprint" as "a trail of data that reveals intimate details about our lives". Option B, "A record of someone's online activities", is an accurate paraphrase of this definition.
Question 2: C
- Question type: Multiple Choice
- Keywords: AI systems predict, shopping patterns
- Location: Paragraph 3, final sentence
- Explanation: The passage states clearly that "Some retailers can even predict major life events, such as pregnancy, based solely on purchasing patterns", which directly matches option C.
Question 3: B
- Question type: Multiple Choice
- Keywords: social media platforms, AI
- Location: Paragraph 4, sentence 3
- Explanation: "This information helps platforms create highly personalized content feeds designed to keep you scrolling" matches option B.
Question 6: TRUE
- Question type: True/False/Not Given
- Keywords: voice assistants, speech patterns, location
- Location: Paragraph 2, sentence 3
- Explanation: "When you ask a voice assistant for the weather, your speech patterns and location are recorded" matches the statement exactly.
Question 7: FALSE
- Question type: True/False/Not Given
- Keywords: all countries, data protection laws, GDPR
- Location: Paragraph 8, sentences 2-3
- Explanation: The passage states "Some regions, such as the European Union" and "many countries lack similar protections", showing that not all countries have GDPR-style laws.
Question 10: facial recognition
- Question type: Sentence Completion
- Location: Paragraph 2, sentence 2
- Explanation: "facial recognition technology identifies who is in the picture", so the word to fill in is "facial recognition".
Question 13: stakeholders
- Question type: Sentence Completion
- Location: Paragraph 10, final sentence
- Explanation: "Stakeholders – including governments, technology companies, and citizens – must work together", so the word needed is "stakeholders".
Passage 2 – Explanations
Question 14: vii (Paragraph A)
- Question type: Matching Headings
- Explanation: Paragraph A focuses on "Research into consumer attitudes" and the finding that "80% of internet users express anxiety about their online privacy", which corresponds to heading vii, "Surveys showing privacy concerns".
Question 15: vi (Paragraph B)
- Question type: Matching Headings
- Explanation: Paragraph B discusses the "convenience-privacy trade-off" and notes that "in exchange for personal data, users receive services", matching heading vi, "The benefits users receive for sharing data".
Question 16: v (Paragraph C)
- Question type: Matching Headings
- Explanation: Paragraph C mentions that "Privacy policies and terms of service are deliberately written in dense legal language" and that "reading all the privacy policies… would require approximately 244 hours", corresponding to heading v.
Question 20: NO
- Question type: Yes/No/Not Given
- Location: Paragraph D
- Explanation: The writer clearly rejects this view, referring to "privacy theater" and "superficial gestures that don't address fundamental power imbalances". Calling it an "illusion of control" signals the writer's disagreement.
Question 21: YES
- Question type: Yes/No/Not Given
- Location: Paragraph C, sentence 5
- Explanation: "Research shows that reading all the privacy policies one encounters in a year would require approximately 244 hours" confirms the statement "more than 200 hours".
Question 24: information asymmetry / asymmetric information
- Question type: Summary Completion
- Location: Paragraph C, first sentence
- Explanation: "Another key element of the privacy paradox relates to the asymmetric information between service providers and users" supplies the required phrase.
Question 25: infer
- Question type: Summary Completion
- Location: Paragraph D
- Explanation: "AI systems can infer information that users never explicitly provided" shows that the verb "infer" fits the context.
Passage 3 – Explanations
Question 27: B
- Question type: Multiple Choice
- Location: Paragraph 1
- Explanation: Option B accurately paraphrases the statement that "Traditional conceptions of privacy… premised on individual control over discrete pieces of information, are increasingly inadequate".
Question 28: C
- Question type: Multiple Choice
- Location: Paragraph 3
- Explanation: "This algorithmically generated knowledge raises profound questions… Should privacy rights extend to probabilistic inferences about individuals, even when those inferences are based on aggregated patterns" matches option C.
Question 29: B
- Question type: Multiple Choice
- Location: Paragraph 4, first sentence
- Explanation: "The opacity of contemporary AI systems – what scholars term the 'black box problem'" is explained as involving "processes that are not readily interpretable", corresponding to option B.
Question 30: C
- Question type: Multiple Choice
- Location: Paragraph 5
- Explanation: "Continuous authentication" is described as "perpetual monitoring that generates behavioral biometrics", matching option C on constant monitoring through behavioral patterns.
Question 32: B (Federated learning)
- Question type: Matching Features
- Location: Paragraph 7, sentence 2
- Explanation: "Federated learning enables AI models to be trained on distributed datasets without centralizing raw data" matches feature B.
Question 33: A (Differential privacy)
- Question type: Matching Features
- Location: Paragraph 7, sentence 3
- Explanation: "Differential privacy introduces mathematical guarantees that individual records cannot be reconstructed" corresponds to feature A.
Question 37: surveillance capitalism
- Question type: Short-answer
- Location: Paragraph 8, first sentence
- Explanation: The passage states "what Shoshana Zuboff terms 'surveillance capitalism'", giving the phrase explicitly.
Question 39: relational / contextual
- Question type: Short-answer
- Location: Paragraph 10, sentence 2
- Explanation: "developing legal and technical frameworks for collective data governance that recognize data's relational and contextual nature". Either "relational" or "contextual" is acceptable.
Question 40: interdisciplinary dialogue
- Question type: Short-answer
- Location: Paragraph 10, final sentence
- Explanation: The passage refers to "maintaining interdisciplinary dialogue among computer scientists, legal scholars, ethicists, and policymakers", which gives the exact phrase.
Key Vocabulary by Passage
Passage 1 – Essential Vocabulary
| Vocabulary | Part of speech | Pronunciation | Vietnamese meaning | Example from the passage | Collocation |
|---|---|---|---|---|---|
| digital footprint | n | /ˈdɪdʒɪtl ˈfʊtprɪnt/ | dấu vết số, hồ sơ hoạt động trực tuyến | each click creates a digital footprint | leave a digital footprint |
| algorithm | n | /ˈælɡərɪðəm/ | thuật toán | AI algorithms note your preferences | complex algorithm, sophisticated algorithm |
| facial recognition | n | /ˈfeɪʃl ˌrekəɡˈnɪʃn/ | nhận diện khuôn mặt | facial recognition technology identifies people | facial recognition system/software |
| comprehensive profile | n | /ˌkɒmprɪˈhensɪv ˈprəʊfaɪl/ | hồ sơ toàn diện | create a comprehensive profile of your habits | build/develop a comprehensive profile |
| purchasing patterns | n | /ˈpɜːtʃəsɪŋ ˈpætənz/ | mô hình mua sắm | predict pregnancy based on purchasing patterns | analyze purchasing patterns |
| privacy concerns | n | /ˈprɪvəsi kənˈsɜːnz/ | mối quan ngại về quyền riêng tư | raises significant privacy concerns | address/raise privacy concerns |
| wearable devices | n | /ˈweərəbl dɪˈvaɪsɪz/ | thiết bị đeo được | wearable devices like fitness trackers | smart wearable devices |
| vital signs | n | /ˈvaɪtl saɪnz/ | dấu hiệu sinh tồn | monitor vital signs such as heart rate | check/monitor vital signs |
| fraudulent transactions | n | /ˈfrɔːdjələnt trænˈzækʃnz/ | giao dịch gian lận | detect fraudulent transactions | prevent fraudulent transactions |
| data aggregation | n | /ˈdeɪtə ˌæɡrɪˈɡeɪʃn/ | tập hợp dữ liệu | a practice known as data aggregation | data aggregation technique |
| consumer awareness | n | /kənˈsjuːmə əˈweənəs/ | nhận thức của người tiêu dùng | happens without consumer awareness | increase/raise consumer awareness |
| regulatory landscape | n | /ˈreɡjələtəri ˈlændskeɪp/ | bối cảnh quy định pháp lý | the regulatory landscape is struggling | evolving regulatory landscape |
Passage 2 – Essential Vocabulary
| Vocabulary | Part of speech | Pronunciation | Vietnamese meaning | Example from the passage | Collocation |
|---|---|---|---|---|---|
| privacy paradox | n | /ˈprɪvəsi ˈpærədɒks/ | nghịch lý quyền riêng tư | researchers have termed the privacy paradox | illustrate/explain the privacy paradox |
| compromise | v | /ˈkɒmprəmaɪz/ | thỏa hiệp, làm tổn hại | behaviors that compromise their privacy | compromise security/privacy |
| data exploitation | n | /ˈdeɪtə ˌeksplɔɪˈteɪʃn/ | khai thác dữ liệu | made data exploitation more powerful | prevent data exploitation |
| immediate gratification | n | /ɪˈmiːdiət ˌɡrætɪfɪˈkeɪʃn/ | sự thỏa mãn tức thì | immediate gratification from digital services | seek immediate gratification |
| asymmetric information | n | /ˌæsɪˈmetrɪk ˌɪnfəˈmeɪʃn/ | thông tin bất đối xứng | relates to the asymmetric information | problem of asymmetric information |
| dense legal language | n | /dens ˈliːɡl ˈlæŋɡwɪdʒ/ | ngôn ngữ pháp lý khó hiểu | written in dense legal language | complex/dense legal language |
| illusion of control | n | /ɪˈluːʒn əv kənˈtrəʊl/ | ảo giác về sự kiểm soát | creating an illusion of control | maintain an illusion of control |
| privacy theater | n | /ˈprɪvəsi ˈθɪətə/ | sự giả vờ bảo vệ quyền riêng tư | examples of privacy theater | engage in privacy theater |
| network effects | n | /ˈnetwɜːk ɪˈfekts/ | hiệu ứng mạng lưới | perpetuated by network effects | leverage/exploit network effects |
| temporal discounting | n | /ˈtempərəl dɪsˈkaʊntɪŋ/ | chiết khấu thời gian | concept of temporal discounting | demonstrate temporal discounting |
| manipulative targeting | n | /məˈnɪpjələtɪv ˈtɑːɡɪtɪŋ/ | nhắm mục tiêu thao túng | concerns about manipulative targeting | prevent manipulative targeting |
| surveillance capitalism | n | /səˈveɪləns ˈkæpɪtəlɪzəm/ | chủ nghĩa tư bản giám sát | shifting away from surveillance capitalism | critique of surveillance capitalism |
| privacy-enhancing technologies | n | /ˈprɪvəsi ɪnˈhɑːnsɪŋ tekˈnɒlədʒiz/ | công nghệ tăng cường quyền riêng tư | privacy-enhancing technologies promise | develop privacy-enhancing technologies |
| differential privacy | n | /ˌdɪfəˈrenʃl ˈprɪvəsi/ | quyền riêng tư vi phân | differential privacy introduces guarantees | implement differential privacy |
| reconfiguration | n | /ˌriːkənˌfɪɡjəˈreɪʃn/ | sự cấu hình lại | fundamental reconfiguration of relationships | require a reconfiguration |
Passage 3 – Essential Vocabulary
| Vocabulary | Part of speech | Pronunciation | Vietnamese meaning | Example from the passage | Collocation |
|---|---|---|---|---|---|
| proliferation | n | /prəˌlɪfəˈreɪʃn/ | sự gia tăng nhanh chóng | proliferation of AI systems | rapid proliferation |
| inferential analytics | n | /ˌɪnfəˈrenʃl ˌænəˈlɪtɪks/ | phân tích suy luận | generate knowledge through inferential analytics | sophisticated inferential analytics |
| algorithmic governance | n | /ˌælɡəˈrɪðmɪk ˈɡʌvənəns/ | quản trị thuật toán | exercise algorithmic governance | systems of algorithmic governance |
| reconceptualization | n | /ˌriːkənˌseptʃuəlaɪˈzeɪʃn/ | sự tái khái niệm hóa | necessitates a reconceptualization | require reconceptualization |
| jurisprudence | n | /ˌdʒʊərɪsˈpruːdns/ | luật học, án lệ | subsequent jurisprudence expanding | legal jurisprudence |
| informational self-determination | n | /ˌɪnfəˈmeɪʃənl ˌself dɪˌtɜːmɪˈneɪʃn/ | tự quyết định thông tin | includes informational self-determination | right to informational self-determination |
| epistemological | adj | /ɪˌpɪstəməˈlɒdʒɪkl/ | thuộc về nhận thức luận | epistemological implications | epistemological questions/concerns |
| opacity | n | /əʊˈpæsəti/ | tính mờ đục, không minh bạch | opacity of contemporary AI systems | algorithmic opacity |
| black box problem | n | /blæk bɒks ˈprɒbləm/ | vấn đề hộp đen | scholars term the black box problem | address the black box problem |
| continuous authentication | n | /kənˈtɪnjuəs ɔːˌθentɪˈkeɪʃn/ | xác thực liên tục | operate through continuous authentication | implement continuous authentication |
| ambient surveillance | n | /ˈæmbiənt səˈveɪləns/ | giám sát xung quanh | ambient surveillance of digital environments | pervasive ambient surveillance |
| behavioral biometrics | n | /bɪˈheɪvjərəl ˌbaɪəʊˈmetrɪks/ | sinh trắc học hành vi | generates behavioral biometrics | analyze behavioral biometrics |
| geopolitical dimension | n | /ˌdʒiːəʊpəˈlɪtɪkl daɪˈmenʃn/ | chiều kích địa chính trị | geopolitical dimension deserves attention | consider the geopolitical dimension |
| precautionary governance | n | /prɪˈkɔːʃənəri ˈɡʌvənəns/ | quản trị phòng ngừa | emphasizes precautionary governance | adopt precautionary governance |
| federated learning | n | /ˈfedəreɪtɪd ˈlɜːnɪŋ/ | học liên kết | federated learning enables training | implement federated learning |
| homomorphic encryption | n | /ˌhɒməˈmɔːfɪk ɪnˈkrɪpʃn/ | mã hóa đồng cấu | homomorphic encryption allows computations | use homomorphic encryption |
| commodification | n | /kəˌmɒdɪfɪˈkeɪʃn/ | sự hàng hóa hóa | commodification of personal data | prevent the commodification |
| normative questions | n | /ˈnɔːmətɪv ˈkwestʃənz/ | câu hỏi chuẩn mực | normative questions about balance | address normative questions |
Conclusion
The topic "How does AI affect privacy in the digital age?" not only reflects current technology trends but has also become one of the themes that appears regularly in recent IELTS Reading exams. Through this sample test, you have worked through all three difficulty levels, with a total of 40 questions across 7 question types.
Passage 1 introduces the basic concepts of AI and privacy in accessible language. Passage 2 analyzes the privacy paradox in greater depth, with more complex sentence structures and a higher demand for inference. Passage 3 challenges you with dense academic content, sophisticated vocabulary, and question types that require synthesizing information.
The detailed answer key and explanations show how to identify keywords, locate information in the passage, and recognize paraphrase: the core skills for achieving a high band score in IELTS Reading. The vocabulary is organized in tables to make it easy to review and memorize key collocations.
Use this test as a tool to assess your current level and identify areas for improvement. Regular practice with similar tests will build the confidence and skills you need to master IELTS Reading. Good luck with your studies, and may you reach your target band score!