TPACK – UDL Scale for Science Teachers: Construction, Validation and Reliability

The Technological Pedagogical Content Knowledge (TPACK) framework has been widely studied, but only a few studies integrate it with the Universal Design for Learning (UDL) framework. This study aims to develop the TPACK-UDL Scale to examine teachers' pedagogical, content, and technological knowledge related to inclusive science learning. The development of the scale follows the eight development steps by DeVellis. After reviewing the constructs of TPACK and UDL and examining the intersections between them, an initial draft of 60 items was prepared, divided into eight aspects (Pedagogical Knowledge/PK, Technological Knowledge/TK, Content Knowledge/CK, Technological Content Knowledge/TCK, Pedagogical Content Knowledge/PCK, Technological Pedagogical Knowledge/TPK, TPACK, and Inclusive Education Knowledge/IEK). A panel of six experts examined the initial draft of the TPACK-UDL Scale, focusing on its content and item construction. The experts' feedback was used to revise the initial draft into a second draft, which was then tested on 42 randomly selected science teachers who agreed to fill out the scale voluntarily. The validity and reliability of the scale were tested using the Partial Least Squares (PLS) method. Several iterative testing stages produced a final set of 48 valid items with Cronbach's Alpha > 0.8, ρA > 0.8, and composite reliability > 0.9. The TPACK-UDL Scale is therefore valid and reliable for measuring teachers' knowledge and abilities in designing inclusive science learning. Suggestions are made for the practical and theoretical use of the TPACK-UDL Scale in future research.

Previous studies state that ICT knowledge is essential for teachers to integrate into their classrooms within their conceptual framework of teaching knowledge (Qasem & Viswanathappa, 2016). Teachers need representational skills to integrate ICT into the way they present content, and a pedagogical approach that constructively utilizes technology to deliver content based on the intersection of technology, pedagogy, and content knowledge (Mishra & Koehler, 2006). Studies show positive trends in improving teachers' skills in incorporating ICT into instructional practice and discuss how teachers employ "Technological Pedagogical and Content Knowledge (TPACK)" to help them deal with the learning problems posed by quickly evolving technologies (Alazzam, 2012; Allan et al., 2010; Baran et al., 2011; Chai et al., 2013; Jimoyiannis, 2010; Kazua & Demirkol, 2014; Lee, 2010; Mishra & Koehler, 2006). In the Indonesian context, however, many potential gaps remain in which TPACK could be used to improve pre- and in-service teachers' knowledge and competencies, including integrating TPACK with the Universal Design for Learning (UDL).
The UDL framework was first initiated in 1984 by the Center for Applied Special Technology (CAST) to help teachers create inclusive lessons for diverse students (CAST, 2015), focusing on eliminating barriers to learning and accommodating students' individual differences. UDL comprises an approach to developing curricula (goals, content, methods, and assessments) that: a) provide equal opportunities for all individuals to learn; and b) are flexible, customizable, and adjustable to individual needs (CAST, 2015), including for students with disabilities (Rao et al., 2014; Rao et al., 2021). UDL has three main tenets, namely "multiple means of engagement, multiple means of action and expression, and multiple means of representation" (CAST, 2014, 2015, 2018), which recognize that students receive and express information in various ways and are motivated to learn in different ways (Meyer et al., 2014). Rooted in universal principles for designing and developing curricula for all students, UDL principles are now increasingly widespread, including in creating learning for children with disabilities so that all children, without exception, have equal opportunities to learn (King-Sears, 2009). The UDL framework reduces barriers to teaching because it benefits all students, not only those with disabilities or special needs, by offering guidelines for teachers to choose appropriate supports for students while retaining high expectations for them.
To make instruction inclusive, the UDL framework offers two key points, namely flexibility and accessibility. Flexibility provides accommodations for students to access learning in an optimal way according to their abilities and preferences (King-Sears, 2009). This flexibility includes how teachers offer materials in many ways and modes, how teachers provide many ways for students to demonstrate their understanding, and how teachers provide many strategies to engage students and encourage participation in learning (Bernacchio & Mullen, 2007; Glass et al., 2013; Hall et al., 2015; Kelly et al., 2022; King-Sears et al., 2015). "Greater flexibility in curriculum and instruction also can increase supportive exchange and interaction between student peers, as well as between students and instructors" (Bernacchio & Mullen, 2007, p. 168). The second point, accessibility, concerns how teachers create an environment that enables students with disabilities to access learning experiences as their peers do. To realize accessibility in learning, UDL proactively draws on lesson plan design, pedagogical content knowledge (part of TPACK), and innovation (including ICT) (Basham et al., 2010; King-Sears, 2009).
The increasing adoption and use of assistive technology promote inclusion in educational settings and increase social, legal, and technical acceptability for students and persons with disabilities outside the school setting. Assistive technology facilitates inclusion, encourages interaction, and strengthens group communication, which can help increase the retention of students with disabilities and enable them to participate more in the classroom. UDL is closely connected to technology, although technology is not strictly required for implementing UDL principles. If teachers are willing to apply UDL principles and have a substantial TPACK foundation, learning will occur more optimally for all children, including students with disabilities.
Teachers who use innovation in learning play a crucial role in the digitization of education. Technology can improve the quality of learning, making the learning process more exciting and interactive. Therefore, teachers must expand and integrate their pedagogical skills with content and technology. This ability is summarized by Schmidt et al. (2009) as TPACK. TPACK is knowledge that incorporates technology with knowledge about using innovation to teach content and about complex learning concepts (Asad et al., 2021). Knowledge of the teacher-technology relationship must be developed to understand the use of technology in education (Voogt et al., 2016). Thus, it is essential to investigate how teachers use technology in teaching and learning, how it benefits them, what challenges and barriers they face when applying it, and their expectations of technology-based learning.
The development of educational technology in learning has made TPACK a research focus among teachers and scholars in the educational technology field (Chai et al., 2013). A giant leap from "a techno-centric to a techno-pedagogical" approach has become the new idea of technology integration in education (Kabakçı-Yurdakul et al., 2012). Kabakçı-Yurdakul et al. (2012) explain that "the techno-pedagogical integration approach is based on pedagogy and puts pedagogy as well as technology into practice in the integration process", while "the techno-centric integration approach focuses on technology and aims to help teachers acquire the skills and knowledge needed to use various technologies". This techno-pedagogical integration approach is the framework of TPACK.
Previous researchers have developed various types of TPACK scales. Koehler and Mishra (2005) initiated the development of the TPACK survey in 2005, then called TPCK. They created a 14-item survey to assess the evolution of learning and the perceptions of postgraduate students and lecturers about the learning atmosphere, theoretical and practical knowledge of technology, content, group dynamics, and TPCK development. The results of Koehler and Mishra's research suggest that a more comprehensive understanding of the relationship between technology, pedagogy, and content, as well as the context in which they function, can be attained through learning by design. In 2006, Mishra and Koehler (2006) formally published the TPCK framework. In 2009, Archambault and Crippen investigated the TPACK components of online K-12 distance educators in the US. Their results show that respondents reported mastering the domains of content knowledge (CK), pedagogical knowledge (PK), and pedagogical content knowledge (PCK), but lacked confidence in technological knowledge (TK). In the same year, Schmidt et al. (2009) compiled the "Pre-service Teacher Knowledge Survey on Teaching and Technology" in math, science, social studies, and literacy. The survey measured seven TPACK constructs among 124 teacher candidates in the US. Graham et al. (2009) tested an instrument to measure teachers' confidence levels in four knowledge areas, TK, TPK, TCK, and TPACK, in the SciencePlus Program. Graham et al. found that the instrument assisted program coordinators in checking for improved TPACK confidence over the program's eight-month course. It also helped classroom teachers develop TCK confidence through encouragement to learn content-specific technology used in science. Koh et al. (2010) examined the TPACK profiles of 1185 Singaporean teacher candidates using Exploratory Factor Analysis (EFA).
They found five different constructs: "technological knowledge, content knowledge, pedagogical knowledge, knowledge of teaching with technology, and knowledge of critical reflection". Sahin (2011) developed a TPACK scale consisting of 47 valid and reliable items. Voogt et al. (2013) developed seven components of the TPACK framework that can be represented as subscales. Jang and Tsai (2013) involved 1292 science teachers in Taiwan in examining their TPACK profiles, and the results indicate that TPACK is influenced by gender and teaching experience. Male science teachers have higher TPACK than female teachers, and teachers with more teaching experience are statistically shown to have higher PK and CK but lower TK than novice teachers. Moreover, Chai et al. (2013) stated that teacher educators' understanding of the different TPACK components and other influencing factors contributes to teachers' skills in designing and evaluating educational technology programs. Bilici et al. (2013) integrated self-efficacy into their TPACK scale and produced 52 highly valid and reliable statements; they believed that self-efficacy is crucial in integrating technology into learning. Sang et al. (2016) reported that TPACK is influenced by cultural context, as teachers in different cultural contexts demonstrate different TPACK. Sang et al. developed 42 valid and reliable statements based on analysis using two methods: 229 samples were tested using Exploratory Factor Analysis (EFA) and the remaining 207 using Confirmatory Factor Analysis (CFA). Durdu and Dag (2017) studied the TPACK development of 71 teacher candidates, analyzed their conceptions of learning and teaching with technology, and found that the learning process they implemented positively affected their TPACK development. However, the candidates needed more courses to improve computer-based teaching content and to use their content in microteaching sessions.

Randwick International of Education and Linguistics Science Journal
Considering the challenges of construct validation in previous research (most of which was done with Cronbach's Alpha, EFA, and CFA) and the need for TPACK integrated with UDL to realize inclusive science learning in the Indonesian context, this study aims to develop and validate a scale that empirically measures and describes science teachers' TPACK-UDL using the PLS method. Strictly validated scales for measuring the TPACK-UDL of teachers and pre-service teachers in science are lacking. Therefore, the TPACK-UDL Scale needs to be constructed because it has practical implications for teachers, school principals, and the Department of Education related to valid and reliable measurement of teachers' knowledge and abilities in designing inclusive science learning, as well as recommendations for future researchers in the field of science education. The eight development steps by DeVellis (2003) were carried out to develop a reliable and valid scale. These steps are: (1) clearly defining the constructs to measure, (2) creating a set of items, (3) determining the measurement format, (4) having the initial item pool reviewed by experts, (5) considering the inclusion of validation items, (6) administering the set of items to samples, (7) re-reviewing the items, and (8) optimizing scale length.

Eight development steps by DeVellis
Step 1: Determine the focused construct for the TPACK-UDL measurement. In Step 1, the constructs to be measured were identified, namely the teacher's knowledge and ability to design inclusive science learning based on the two frameworks of TPACK and UDL, and the construct's definition was then set. This first step included a thorough literature review of TPACK and UDL. As mentioned earlier, Mishra and Koehler's TPACK model and CAST's UDL framework were adapted as theoretical frameworks. A review of previous instruments (Archambault & Crippen, 2009; Bilici et al., 2013; Graham et al., 2009; Jang & Tsai, 2013; Koh et al., 2010; Sahin, 2011; Sang et al., 2016; Yurdakul, 2018) showed that they were developed to assess the TPACK of science teachers but had yet to integrate UDL as a framework for designing inclusive learning.
Step 1 also determined the target group for the scale, i.e., science teachers at the middle school level. Step 2: Develop statement items. In this step, 60 items were generated based on the TPACK and UDL theoretical frameworks. These items are: Technological Knowledge (TK) = 3 items, Pedagogical Knowledge (PK) = 12 items, Content Knowledge (CK) = 7 items, Technological Pedagogical Knowledge (TPK) = 6 items, Pedagogical Content Knowledge (PCK) = 12 items, Technological Content Knowledge (TCK) = 7 items, Technological Pedagogical and Content Knowledge (TPACK) = 7 items, and Inclusive Education Knowledge (IEK) = 6 items.

Step 3: Determine the scale format. Choosing a response format is essential in developing a scale (DeVellis, 2003). A 1-5 response format, from very poor to excellent, was used because it has two extreme sides (positive and negative) and one neutral midpoint.
Step 4: Obtain expert opinion to review the initial set of items. Six experts (see Table 1), with expertise in learning technology, measurement and evaluation, and inclusive education and disability, examined the 60 initial TPACK-UDL items. The experts focused on the content of each item, including its clarity, conciseness, and conformity with what is being measured. The experts provided validation by offering comments and feedback, which were used to revise the items.
Step 5: Consider the addition of validation items. According to DeVellis (2003), it is necessary to determine the validity of several additional item scales to be included in the instrument. At this stage, the knowledge component about inclusive education is included in the scale to reveal the respondent's general knowledge.
Step 6: Field trials. The final version of the TPACK-UDL Scale was administered to 42 middle school science teachers for field trials. Subjects were randomly selected from the five districts of the Special Province of Yogyakarta, namely Yogyakarta Municipality, Sleman, Bantul, Kulonprogo, and Gunung Kidul. The TPACK-UDL Scale was distributed via Google Forms and on paper by mail, but most subjects preferred to fill it out via Google Forms.
Steps 7 and 8: Evaluate items and optimize scale length. These steps determined the nature of the latent variables underlying the set of items and the internal consistency reliability. A quantitative method was used to determine the extent of the scale's validity and reliability. First, as an initial test, Pearson's correlation for item validity and Cronbach's Alpha for reliability were computed using the Statistical Package for the Social Sciences (SPSS) Version 26 to assess the validity of the items and the consistency of each TPACK-UDL knowledge domain subscale (TK, PK, CK, TPK, PCK, TCK, TPACK, and IEK). Second, construct validity for each knowledge domain was then analyzed using the Partial Least Squares (PLS) method with SmartPLS 4.0 software to assess internal consistency reliability, convergent validity, and discriminant validity. PLS was used because the scale was administered to a small sample (N = 42); Yamin and Kurniawan (2009) suggest PLS as an alternative Structural Equation Modeling (SEM) method for evaluating relationships between variables with small samples of 30-100.
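The two initial statistics named here, item validity via Pearson's item-total correlation and internal consistency via Cronbach's Alpha, can be sketched in a few lines of Python. This is an illustrative reimplementation of the standard formulas, not the SPSS or SmartPLS procedures actually used in the study, and the function names are our own:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of the total score
    return (k / (k - 1)) * (1 - item_vars / total_var)

def item_total_validity(items):
    """Pearson correlation of each item with the total score (item validity)."""
    items = np.asarray(items, dtype=float)
    total = items.sum(axis=1)
    return np.array([np.corrcoef(items[:, j], total)[0, 1]
                     for j in range(items.shape[1])])
```

With real scale data, an invalid item would show a weak, non-significant item-total correlation, while a subscale of strongly related items yields an alpha approaching the values reported below.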

RESULTS AND DISCUSSION
The construction, validity, and reliability testing of the TPACK-UDL Scale are summarized below in three phases derived from DeVellis's eight steps of scale development.

Phase I. Initial Item Review
After the blueprint and item statements of the scale were prepared, the next step was to determine content and construct validity by involving a panel of six experts, i.e., two content experts (TPACK and UDL), two scale experts, and two inclusive education experts, as presented in Table 1.
During the panel review process, the 60 items were sent to the six reviewers to collect data on clarity and specificity. Each item represents one of the eight TPACK-UDL subdomains plus one inclusive education subdomain. The following are the results of the construct validation by the expert reviewers. 1) Content expert 1 raised three questions, namely whether the "ability to compose assessment instruments" belongs to CK or PK, whether the "ability to make scoring guidelines for cognitive assessment and rubrics for affective and psychomotor assessment" belongs to CK or PK, and whether the "ability to use various assessment instruments (including alternative assessments)" belongs to CK or PK. After exploring the TPACK literature, the teacher's ability to design assessments was placed in the PK aspect; PK contains knowledge teachers must master in learning, for example, teaching methods, class management, lesson planning, and assessing student activities. 2) Content expert 2 did not provide substantial input, only suggestions on sentence construction to improve readability and make the statement items more specific. Discussions were held with content expert 2 to clarify the feedback, and the results were used to revise the statement items concerned. 3) Scale expert 1 provided the following feedback: "Basically, all the items are relevant. But a more straightforward level of statement is needed on whether the teacher is able or unable to design learning objectives, so there could be two types of statement items, namely positive items and negative items with reversed scoring." This input was not followed up because the statements already totaled 60 items; making both positive and negative versions would double them to 120, which could bias the results because there would be too many statement items. 4) Scale expert 2 and inclusive education experts 1 and 2 did not provide feedback on the draft questionnaire. After revising the scale based on the experts' feedback, the next step was to conduct a full-scale field test on randomly selected science teachers willing to fill out the scale.

Phase II. Full-Scale Field Test
The full-scale field test was used to try out the scale that had been revised and validated based on expert feedback in the previous stage. Table 2 illustrates the demographics of the teachers in the pilot sample. A total of 42 science teachers were involved in this scale trial; they had previously been asked for their consent to participate in this study. Twenty-nine teachers are from public and private middle schools, and the remaining 13 are from Islamic middle schools. The majority of teachers (45.24%) have more than 10 years of teaching experience, but only 30.95% had experience teaching students with disabilities, and the most common type of disability among their students is physical disability (46.16%). The scale was delivered to the 42 science teachers via Google Forms (https://forms.gle/RTZKYzgiWuFUKxqJ9) and on paper. The initial evaluation of the TPACK-UDL Scale was to perform reliability and validity tests. This study used Cronbach's Alpha for reliability, to check internal consistency based on the average inter-item correlation, and Pearson Product Moment Correlations to check item validity; SPSS Version 26 was used to perform the tests. The TPACK-UDL Scale reliability test shows a Cronbach's Alpha value of 0.971 (see Table 3), which means that the TPACK-UDL Scale has an internal consistency of 97.1%. This figure indicates a 97.1% certainty that the statement items will produce more or less the same data repeatedly, so the TPACK-UDL instrument has very high reliability. For validity, an item is declared valid if the significance value (Sig. (2-tailed)) of the Pearson correlation is < 0.05, as shown in Table 4. Table 4 shows one invalid item (PK-11), with Sig. (2-tailed) = 0.305 > 0.05. The remaining 59 items have Sig. (2-tailed) < 0.05, so they are declared valid.
After testing the validity and reliability using statistical tests, the next step was to assess the reflective construct of the TPACK-UDL Scale using the PLS method. This test includes "indicator reliability, internal consistency reliability (Cronbach's alpha, reliability coefficient ρA, and composite reliability ρC), convergent validity, and discriminant validity" (Hair et al., 2021, p. 13). The summary of these tests is shown in Table 5. The first test assessed indicator reliability by squaring the outer loadings of the reflective constructs and evaluating the loadings. Indicator reliability describes "how much of each indicator's variance is explained by its construct" (Hair et al., 2021, p. 77). According to Hair et al. (2021), indicator loadings greater than 0.708 can be accepted because they imply that the construct explains more than half of the variance in the indicator. Accordingly, items with low outer loading values were deleted successively, and the test was repeated twice to obtain the findings shown in Table 5. The outer loadings of twelve indicators, IEK-6 (0.436), PCK-6 (0.616), PCK-8 (0.443), PCK-11 (0.566), PCK-12 (0.657), PK-2 (0.494), PK-7 (0.496), PK-9 (0.605), PK-11 (0.166), PK-12 (0.488), TPK-5 (0.417), and TPK-6 (0.601), are less than the threshold of 0.70; these 12 indicators were therefore deleted to increase the constructs' AVE values. The remaining 48 indicators with acceptable factor loadings show that the eight constructs in TPACK-UDL each explain more than 50 percent of the variance of their indicators, which means indicator reliability was acceptable.
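The indicator-reliability rule applied here (square the outer loading and retain only loadings above 0.708, since 0.708² ≈ 0.50) can be illustrated with a short sketch. The three low loadings come from the text above; "PK-x" is a hypothetical retained indicator added only for contrast:

```python
# Three dropped loadings reported in the text, plus one hypothetical
# retained loading ("PK-x" is not a real item on the scale).
loadings = {"PK-11": 0.166, "TPK-5": 0.417, "IEK-6": 0.436, "PK-x": 0.78}

def indicator_reliability(loading):
    # Squared outer loading: the share of the indicator's variance
    # explained by its construct.
    return loading ** 2

dropped = sorted(k for k, v in loadings.items() if v < 0.708)
retained = sorted(k for k, v in loadings.items() if v >= 0.708)
```

The 0.708 cutoff is simply the loading at which indicator reliability crosses one half, which is why deleting low-loading items raises each construct's AVE.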

The second test assessed internal consistency reliability, which "is the extent to which indicators measuring the same construct are associated with each other" (Hair et al., 2021, p. 77), using three indexes, i.e., Cronbach's alpha, ρA, and composite reliability (CR). In general, larger values indicate greater reliability. Reliability values between 0.60 and 0.70 are deemed "acceptable" in exploratory research, while values between 0.70 and 0.90 range from "satisfactory to good." On the other hand, a value of 0.95 or higher is undesirable because it indicates redundant items that will undermine construct validity (Diamantopoulos et al., 2012). Composite reliability is considered more accurate than Cronbach's alpha in measuring a construct's internal consistency because Cronbach's alpha assesses the lower limit of a construct's reliability, whereas CR estimates the actual value of a construct's reliability. The CR value should be greater than 0.70, and Cronbach's alpha should be greater than 0.70 (Ghozali, 2016). Furthermore, Dijkstra and Henseler (2015) proposed the reliability coefficient ρA as an approximation of construct reliability, which often sits between Cronbach's alpha and CR. Accordingly, the report on the reliability of the TPACK-UDL Scale in this study includes the Cronbach's alpha, ρA, and CR coefficients. According to Table 5, the Cronbach's alpha, ρA, and CR values for all constructs in TPACK-UDL range from minimums of 0.878, 0.887, and 0.905 to maximums of 0.938, 0.943, and 0.950, respectively, indicating that they pass the internal consistency reliability thresholds recommended by Hair et al. (2019) with no redundant items in each construct.
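The composite reliability (ρC) reported in Table 5 follows a standard formula for standardized reflective indicators: the squared sum of loadings over that quantity plus the summed error variances. A minimal Python sketch with hypothetical loadings (illustrative only, not the SmartPLS computation):

```python
import numpy as np

def composite_reliability(loadings):
    """rho_C for standardized indicators:
    (sum lam)^2 / ((sum lam)^2 + sum(1 - lam^2)),
    where 1 - lam^2 is each indicator's error variance."""
    lam = np.asarray(loadings, dtype=float)
    num = lam.sum() ** 2
    return num / (num + (1.0 - lam ** 2).sum())
```

For example, six indicators all loading at 0.80 give a CR of about 0.914, comfortably above the 0.70 floor but below the 0.95 redundancy ceiling discussed above.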
The third phase was to report each construct measure's convergent validity. "Convergent validity is the extent to which the construct converges in order to explain the variance of its indicators" (Hair et al., 2021, p. 78), which means that the validity of each relationship between indicators and constructs or latent variables must be determined. In PLS-SEM, there are two forms of validity: convergent and discriminant. Convergent validity refers to the degree to which a set of indicators reflects one underlying latent variable. This form is characterized by unidimensionality, which is expressed using the Average Variance Extracted (AVE) value. The AVE value must be greater than 0.5; this denotes adequate convergent validity, meaning that on average, the latent variable can explain more than half of the variance of its indicators (Ghozali, 2016). Table 5 shows that after deleting 12 items, the values of outer loading, Cronbach's alpha, ρA, CR, and AVE for each construct and indicator reached an acceptable and satisfactory level. All constructs have an AVE value larger than 0.50, with the PK variable having the lowest value (0.576) and the TK variable the highest (0.857). These numbers satisfy the minimum AVE limit of 0.50, so all constructs met the criteria for reliability and convergent validity. The last step was to evaluate discriminant validity, which "measures the extent to which a construct is empirically distinct from other constructs in the structural model" (Hair et al., 2021, p. 78). The heterotrait-monotrait ratio (HTMT) criterion proposed by Henseler et al. (2016) was used to assess discriminant validity in this study, with an acceptable HTMT cutoff of 0.90 (Hair et al., 2021). Table 6 reveals that one HTMT value exceeds 0.90 (TPK-CK = 0.916), indicating that discriminant validity is not established for that pair of constructs.
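Both criteria used in this phase have simple closed forms: AVE is the mean squared standardized loading, and HTMT is the mean between-construct item correlation divided by the geometric mean of the average within-construct item correlations. A sketch under those standard definitions (illustrative only, not the SmartPLS implementation):

```python
import numpy as np
from itertools import combinations

def ave(loadings):
    """Average Variance Extracted: mean of squared standardized loadings."""
    lam = np.asarray(loadings, dtype=float)
    return float((lam ** 2).mean())

def htmt(R, idx_a, idx_b):
    """Heterotrait-monotrait ratio for two constructs, given the
    item-level correlation matrix R and each construct's item indices."""
    hetero = np.abs(R[np.ix_(idx_a, idx_b)]).mean()
    mono_a = np.mean([abs(R[i, j]) for i, j in combinations(idx_a, 2)])
    mono_b = np.mean([abs(R[i, j]) for i, j in combinations(idx_b, 2)])
    return float(hetero / np.sqrt(mono_a * mono_b))
```

An HTMT near 1 means the items of two constructs correlate across constructs about as strongly as within them, which is why the TPK-CK value of 0.916 above flags a discriminant validity concern.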
The TPACK-UDL Scale has been successfully developed following the DeVellis procedure, with eight aspects divided into 48 statement items. The items have been tested for validity and reliability so that they can be used to reveal teacher knowledge related to technology, pedagogy, and content in designing inclusive learning. Preliminary testing with the Pearson validity test and Cronbach's Alpha reliability test found that one item failed and that reliability was very high, while the PLS test dropped 12 items; this comparison shows that the PLS test is more stringent. In addition, the results indicate that the TPACK-UDL Scale is a promising tool for measuring pre-service teachers' knowledge in designing inclusive learning. Despite the small sample size (N = 42), this study indicates that a reliable measure of TPACK-UDL and its related knowledge domains was generated. The scale produced in this study is distinctive because the UDL component is integrated into seven aspects of TPACK (TK, PK, CK, TCK, PCK, TPK, and TPACK), plus one aspect on inclusive education knowledge (IEK). Developing the TPACK-UDL Scale is essential because teachers need to realize that the students in their classes are diverse, and this diversity should be the basis for designing, developing, and implementing inclusive learning.
In addition, teacher education continues to experience significant changes, and it is now only possible to plan and implement lessons by considering the inclusion of ICT. The flow of information and knowledge also runs very quickly and massively, so more than knowledge of the latest technological gadgets is needed; what is even more critical is information literacy in science learning. Teachers and students must be able to engage with a variety of learning technologies to achieve learning effectiveness.

CONCLUSION
Rapid technological development in learning requires science teachers to be ready to face this challenge. Comprehensive knowledge, skills, attitudes, and beliefs are needed to design tech-based education for all students, including those with disabilities. The TPACK-UDL Scale offers a credible and theoretically cohesive conceptual framework to prepare pre-service science teachers to incorporate technology while effectively teaching a diverse range of learners. This research is critical for defining the TPACK-UDL of science teachers more correctly and clearly.
The findings of this study indicate that teachers' comprehensive understanding of technology, pedagogy, and content as defined by the TPACK framework can be infused with the UDL framework to achieve genuinely inclusive learning. Furthermore, this study has implications for the necessity of providing prospective science teachers with the knowledge and abilities needed to create inclusive science learning in accordance with the TPACK-UDL framework. The scale produced in this study is scientifically appropriate for measuring and supporting pre-service science teachers' knowledge in designing inclusive science learning and for evaluating pre-service science teachers' knowledge development in teaching programs. This scale can be used to plan and implement methods to assist the development of TPACK-UDL for prospective science teachers at teacher education institutions (LPTK) and institutions delivering the Pre-Service Teacher Professional Program (PPG). This scale can also be applied to science fields such as biology, chemistry, and physics.
The development of the TPACK-UDL Scale has limitations, namely that only 42 teachers participated in the field trial, although the instrument has been shown to have high validity and reliability. A larger and more diverse pool of test subjects could further increase the internal consistency of the scale.